Earthquake prediction

Earthquake prediction is a branch of the science of seismology concerned with the specification of the time, location, and magnitude of future earthquakes within stated limits, [1] [a] and particularly "the determination of parameters for the next strong earthquake to occur in a region". [2] Earthquake prediction is sometimes distinguished from earthquake forecasting , which can be defined as the probabilistic assessment of general earthquake hazard, including the frequency and magnitude of damaging earthquakes in a given area over years or decades. [3] [b]

Prediction can be further distinguished from earthquake warning systems, which, upon detection of an earthquake, provide a real-time warning of seconds to neighboring regions that might be affected.

In the 1970s, scientists were optimistic that a practical method for predicting earthquakes would soon be found, but by the 1990s continuing failure led many to question whether it was even possible. [4] Demonstrably successful predictions of large earthquakes have not occurred, and the few claims of success are controversial. For example, the most famous claim of a successful prediction is that alleged for the 1975 Haicheng earthquake. [5] A later study said that there was no valid short-term prediction. [6] Extensive searches have reported many possible earthquake precursors, but, so far, such precursors have not been reliably identified across significant spatial and temporal scales. [7] While part of the scientific community holds that, taking into account non-seismic precursors and given enough resources to study them extensively, prediction might be possible, most scientists are pessimistic and some maintain that earthquake prediction is inherently impossible. [8]

Evaluating earthquake predictions

Predictions are deemed significant if they can be shown to be successful beyond random chance. [9] Therefore, methods of statistical hypothesis testing are used to determine the probability that an earthquake such as the one predicted would happen anyway (the null hypothesis). The predictions are then evaluated by testing whether they correlate with actual earthquakes better than the null hypothesis. [10]
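
This kind of test can be sketched in a few lines (illustrative only; the rate and the prediction counts below are invented, not taken from the literature). If the null model is a Poisson process with rate λ, the chance that one prediction window of length T contains a qualifying quake is 1 − e^(−λT), and the significance of k hits in n predictions is the binomial tail probability:

```python
from math import comb, exp

def chance_per_window(rate_per_year, window_years):
    """Null-hypothesis probability that one prediction window
    contains at least one qualifying quake (Poisson model)."""
    return 1.0 - exp(-rate_per_year * window_years)

def p_value(n_predictions, n_hits, p_chance):
    """Probability of n_hits or more successes by chance alone
    (binomial tail under the null hypothesis)."""
    return sum(comb(n_predictions, k) * p_chance**k * (1 - p_chance)**(n_predictions - k)
               for k in range(n_hits, n_predictions + 1))

# Invented example: 10 predictions, 4 hits, in a region where the null
# model expects one qualifying quake per year and windows last ~37 days.
p = chance_per_window(rate_per_year=1.0, window_years=0.1)
print(round(p_value(10, 4, p), 4))  # -> 0.0108: unlikely to be chance
```

A real evaluation must use a null model that reflects the actual statistics of regional seismicity, which, as the next paragraph notes, are not simply Poissonian.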

In many instances, however, the statistical nature of earthquake occurrence is not simply homogeneous. Clustering occurs in both space and time. [11] In southern California about 6% of M≥3.0 earthquakes are "followed by an earthquake of larger magnitude within 5 days and 10 km." [12] In central Italy 9.5% of M≥3.0 earthquakes are followed by a larger event within 48 hours and 30 km. [13] While such statistics are not satisfactory for purposes of prediction (giving ten to twenty false alarms for each successful prediction) they will skew the results of any analysis that assumes that earthquakes occur randomly in time, for example, as realized from a Poisson process. It has been shown that a "naive" method based solely on clustering can successfully predict about 5% of earthquakes; "far better than 'chance'". [14]
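
The false-alarm arithmetic behind the "naive" clustering rule follows directly from the quoted percentages: if every M≥3.0 event triggers an alarm for a larger event within the stated window, the number of failed alarms per successful prediction is (1 − p)/p. A minimal sketch, using only the rates quoted above:

```python
def false_alarms_per_hit(hit_rate):
    """Alarms that fail, per alarm that succeeds, if every M>=3.0
    event triggers an alarm for a larger nearby event."""
    return (1 - hit_rate) / hit_rate

# Quoted rates: 6% in southern California, 9.5% in central Italy.
for region, rate in [("southern California", 0.06), ("central Italy", 0.095)]:
    print(f"{region}: ~{false_alarms_per_hit(rate):.0f} false alarms per hit")
```

This reproduces the "ten to twenty false alarms for each successful prediction" figure in the text.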

The Dilemma: To Alarm? or Not to Alarm? It is assumed that the public is also warned, in addition to the authorities.

As the purpose of short-term prediction is to enable emergency measures to reduce death and destruction, failure to give warning of a major earthquake that does occur, or at least an adequate evaluation of the hazard, can result in legal liability, or even political purging. For example, it has been reported that members of the Chinese Academy of Sciences were purged for "having ignored scientific predictions of the disastrous Tangshan earthquake of summer 1976." [15] Following the 2009 L'Aquila earthquake, seven scientists and technicians in Italy were convicted of manslaughter, but not so much for failing to predict the earthquake, where some 300 people died, as for giving undue assurance to the populace – one victim called it "anaesthetizing" – that there would not be a serious earthquake, and therefore no need to take precautions. [16]

But warning of an earthquake that does not occur also incurs a cost: not only the cost of the emergency measures themselves, but of civil and economic disruption. [17] False alarms, including alarms that are canceled, also undermine the credibility, and thereby the effectiveness, of future warnings. [18] In 1999 it was reported [19] that China was introducing "tough regulations intended to stamp out 'false' earthquake warnings, in order to prevent panic and mass evacuation of cities triggered by forecasts of major tremors." This was prompted by "more than 30 unofficial earthquake warnings ... in the past three years, none of which has been accurate." [c] The acceptable trade-off between missed quakes and false alarms depends on the societal valuation of these outcomes. The rate of occurrence of both must be considered when evaluating any prediction method. [20]

In a 1997 study [21] of the cost-benefit ratio of earthquake prediction research in Greece, Stathis Stiros suggested that even a (hypothetical) excellent prediction method would be of questionable social utility, because "organized evacuation of urban centers is unlikely to be successfully accomplished", while "panic and other undesirable side-effects can also be anticipated." He found that earthquakes kill less than ten people per year in Greece (on average), and that most of those fatalities occurred in large buildings with identifiable structural issues. Therefore, Stiros stated that it would be much more cost-effective to focus efforts on identifying and upgrading unsafe buildings. Since the death toll on Greek highways is more than 2300 per year on average, he argued that more lives would also be saved if Greece's entire budget for earthquake prediction had been used for street and highway safety instead. [22]

Prediction methods

Earthquake prediction is an immature science: it has not yet led to a successful prediction of an earthquake from first physical principles. Research into methods of prediction therefore focuses on empirical analysis, with two general approaches: either identifying distinctive precursors to earthquakes, or identifying some kind of geophysical trend or pattern in seismicity that might precede a large earthquake. [23] Precursor methods are pursued largely because of their potential utility for short-term earthquake prediction or forecasting, while 'trend' methods are generally thought to be useful for forecasting, long-term prediction (10- to 100-year timescale) or intermediate-term prediction (1- to 10-year timescale). [24]

Precursors

An earthquake precursor is an anomalous phenomenon that might give effective warning of an impending earthquake. [d] Reports of these – though generally recognized as such only after the event – number in the thousands, [26] some dating back to antiquity. [27] There have been around 400 reports of possible precursors in scientific literature, of roughly twenty different types, [28] running the gamut from aeronomy to zoology. [29] None have been found to be reliable for the purposes of earthquake prediction. [30]

In the early 1990s, the International Association of Seismology and Physics of the Earth's Interior (IASPEI) solicited nominations for a Preliminary List of Significant Precursors. Forty nominations were made, of which five were selected as possible significant precursors, with two of those based on a single observation each. [31]

After a critical review of the scientific literature, the International Commission on Earthquake Forecasting for Civil Protection (ICEF) concluded in 2011 that there was "considerable room for methodological improvements in this type of research." [32] In particular, many cases of reported precursors are contradictory, lack a measure of amplitude, or are generally unsuitable for a rigorous statistical evaluation. Published results are biased towards positive results, and so the rate of false negatives (earthquake but no precursory signal) is unclear. [33]

Animal behavior

After an earthquake has already begun, pressure waves (P waves) travel twice as fast as the more damaging shear waves (S waves). [34] P waves are typically not noticed by humans, but some animals may notice these smaller vibrations, which arrive a few to a few dozen seconds before the main shaking, and become alarmed or exhibit other unusual behavior. [35] [36] Seismometers can also detect P waves, and the timing difference is exploited by electronic earthquake warning systems to provide humans with a few seconds to move to a safer location.
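
The available lead time is easy to estimate. A sketch with assumed velocities consistent with the "twice as fast" figure above (8 km/s for P waves, 4 km/s for S waves; real crustal values vary with local geology):

```python
# Assumed wave speeds, km/s (illustrative, not measured values).
VP_KM_S = 8.0   # P-wave (pressure) speed
VS_KM_S = 4.0   # S-wave (shear) speed

def lead_time_s(distance_km):
    """Seconds between P-wave arrival and the stronger S-wave shaking
    at a given epicentral distance."""
    return distance_km / VS_KM_S - distance_km / VP_KM_S

for d in (20, 50, 100):
    print(f"{d} km from the epicenter: ~{lead_time_s(d):.1f} s of warning")
```

At these assumed speeds the lead time is 2.5 s at 20 km and 12.5 s at 100 km, matching the "few to a few dozen seconds" range in the text.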

A review of scientific studies available as of 2018 covering over 130 species found insufficient evidence to show that animals could provide warning of earthquakes hours, days, or weeks in advance. [37] Statistical correlations suggest some reported unusual animal behavior is due to smaller earthquakes (foreshocks) that sometimes precede a large quake, [38] which if small enough may go unnoticed by people. [39] Foreshocks may also cause groundwater changes or release gases that can be detected by animals. [38] Foreshocks are also detected by seismometers, and have long been studied as potential predictors, but without success (see #Seismicity patterns). Seismologists have not found evidence of medium-term physical or chemical changes that predict earthquakes which animals might be sensing. [37]

Anecdotal reports of strange animal behavior before earthquakes have been recorded for thousands of years. [35] Some unusual animal behavior may be mistakenly attributed to a near-future earthquake. The flashbulb memory effect causes unremarkable details to become more memorable and more significant when associated with an emotionally powerful event such as an earthquake. [40] Moreover, the vast majority of scientific reports in the 2018 review did not include observations showing that animals also behaved normally when no earthquake was about to happen, meaning the behavior was not established to be predictive. [38]

Most researchers investigating animal prediction of earthquakes are in China and Japan. [35] Most scientific observations have come from the 2010 Canterbury earthquake in New Zealand, the 1984 Nagano earthquake in Japan, and the 2009 L'Aquila earthquake in Italy. [38]

Animals known to be magnetoreceptive might be able to detect electromagnetic waves in the ultra low frequency and extremely low frequency ranges that reach the surface of the Earth before an earthquake, causing odd behavior. These electromagnetic waves could also cause air ionization, water oxidation and possible water toxification which other animals could detect. [41]

Dilatancy–diffusion

In the 1970s the dilatancy–diffusion hypothesis was highly regarded as providing a physical basis for various phenomena seen as possible earthquake precursors. [42] It was based on "solid and repeatable evidence" [43] from laboratory experiments that highly stressed crystalline rock experienced a change in volume, or dilatancy, [e] which causes changes in other characteristics, such as seismic velocity and electrical resistivity, and even large-scale uplifts of topography. It was believed this happened in a 'preparatory phase' just prior to the earthquake, and that suitable monitoring could therefore warn of an impending quake.

Detection of variations in the relative velocities of the primary and secondary seismic waves – expressed as Vp/Vs – as they passed through a certain zone was the basis for predicting the 1973 Blue Mountain Lake (NY) and 1974 Riverside (CA) quakes. [45] Although these predictions were informal and even trivial, their apparent success was seen as confirmation of both dilatancy and the existence of a preparatory process, leading to what were subsequently called "wildly over-optimistic statements" [42] that successful earthquake prediction "appears to be on the verge of practical reality." [46]

However, many studies questioned these results, [47] and the hypothesis eventually languished. Subsequent study showed it "failed for several reasons, largely associated with the validity of the assumptions on which it was based", including the assumption that laboratory results can be scaled up to the real world. [48] Another factor was the bias of retrospective selection of criteria. [49] Other studies have shown dilatancy to be so negligible that Main et al. 2012 concluded: "The concept of a large-scale 'preparation zone' indicating the likely magnitude of a future event, remains as ethereal as the ether that went undetected in the Michelson–Morley experiment."

Changes in Vp/Vs

Vp is the symbol for the velocity of a seismic "P" (primary or pressure) wave passing through rock, while Vs is the symbol for the velocity of the "S" (secondary or shear) wave. Small-scale laboratory experiments have shown that the ratio of these two velocities – represented as Vp/Vs – changes when rock is near the point of fracturing. In the 1970s it was considered a likely breakthrough when Russian seismologists reported observing such changes (later discounted [50]) in the region of a subsequent earthquake. [51] This effect, as well as other possible precursors, has been attributed to dilatancy, where rock stressed to near its breaking point expands (dilates) slightly. [52]

Study of this phenomenon near Blue Mountain Lake in New York State led to a successful albeit informal prediction in 1973, [53] and it was credited for predicting the 1974 Riverside (CA) quake. [45] However, additional successes have not followed, and it has been suggested that these predictions were a fluke. [54] A Vp/Vs anomaly was the basis of a 1976 prediction of a M 5.5 to 6.5 earthquake near Los Angeles, which failed to occur. [55] Other studies relying on quarry blasts (more precise, and repeatable) found no such variations, [56] while an analysis of two earthquakes in California found that the variations reported were more likely caused by other factors, including retrospective selection of data. [57] Geller (1997) noted that reports of significant velocity changes have ceased since about 1980.

Radon emissions

Most rock contains small amounts of gases that can be isotopically distinguished from the normal atmospheric gases. There are reports of spikes in the concentrations of such gases prior to a major earthquake; this has been attributed to release due to pre-seismic stress or fracturing of the rock. One of these gases is radon, produced by radioactive decay of the trace amounts of uranium present in most rock. [58] Radon is potentially useful as an earthquake predictor because it is radioactive and thus easily detected, [f] and its short half-life (3.8 days) makes radon levels sensitive to short-term fluctuations.
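
The sensitivity to short-term fluctuations follows directly from the 3.8-day half-life; a minimal sketch of the decay arithmetic:

```python
from math import exp, log

# Radon-222 decays with a half-life of about 3.8 days, so a pre-seismic
# spike in concentration fades within days rather than persisting --
# which is why radon levels can track short-term changes.
HALF_LIFE_DAYS = 3.8

def remaining_fraction(days):
    """Fraction of an initial radon quantity left after `days`."""
    return exp(-log(2) * days / HALF_LIFE_DAYS)

print(round(remaining_fraction(3.8), 2))   # half left after one half-life
print(round(remaining_fraction(7.6), 2))   # a quarter after two
```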

A 2009 compilation [59] listed 125 reports of changes in radon emissions prior to 86 earthquakes since 1966. The International Commission on Earthquake Forecasting for Civil Protection (ICEF) however found in its 2011 critical review that the earthquakes with which these changes are supposedly linked were up to a thousand kilometers away, months later, and at all magnitudes. In some cases the anomalies were observed at a distant site, but not at closer sites. The ICEF found "no significant correlation". [60]

Electromagnetic anomalies

Observations of electromagnetic disturbances and their attribution to the earthquake failure process go back as far as the Great Lisbon earthquake of 1755, but practically all such observations prior to the mid-1960s are invalid because the instruments used were sensitive to physical movement. [61] Since then various anomalous electrical, electric-resistive, and magnetic phenomena have been attributed to precursory stress and strain changes that precede earthquakes, [62] raising hopes for finding a reliable earthquake precursor. [63] While a handful of researchers have gained much attention with either theories of how such phenomena might be generated or claims of having observed such phenomena prior to an earthquake, no such phenomenon has been shown to be an actual precursor.

A 2011 review by the International Commission on Earthquake Forecasting for Civil Protection (ICEF) [64] found the "most convincing" electromagnetic precursors to be ultra low frequency magnetic anomalies, such as the Corralitos event (discussed below) recorded before the 1989 Loma Prieta earthquake. However, it is now believed that observation was a system malfunction. Study of the closely monitored 2004 Parkfield earthquake found no evidence of precursory electromagnetic signals of any type; further study showed that earthquakes with magnitudes less than 5 do not produce significant transient signals. [65] The ICEF considered the search for useful precursors to have been unsuccessful. [66]

VAN seismic electric signals

The most touted, and most criticized, claim of an electromagnetic precursor is the VAN method of physics professors Panayiotis Varotsos, Kessar Alexopoulos and Konstantine Nomicos (VAN) of the University of Athens. In a 1981 paper [67] they claimed that by measuring geoelectric voltages – what they called "seismic electric signals" (SES) – they could predict earthquakes. [g]

In 1984, they claimed there was a "one-to-one correspondence" between SES and earthquakes [68] – that is, that "every sizable EQ is preceded by an SES and inversely every SES is always followed by an EQ the magnitude and the epicenter of which can be reliably predicted" [69] – the SES appearing between 6 and 115 hours before the earthquake. As proof of their method they claimed a series of successful predictions. [70]

Although their report was "saluted by some as a major breakthrough", [h] among seismologists it was greeted by a "wave of generalized skepticism". [72] In 1996, a paper VAN submitted to the journal Geophysical Research Letters was given an unprecedented public peer-review by a broad group of reviewers, with the paper and reviews published in a special issue; [73] the majority of reviewers found the methods of VAN to be flawed. Additional criticism was raised the same year in a public debate between some of the principals. [74] [i]

A primary criticism was that the method is geophysically implausible and scientifically unsound. [76] Additional objections included the demonstrable falsity of the claimed one-to-one relationship of earthquakes and SES, [77] the unlikelihood of a precursory process generating signals stronger than any observed from the actual earthquakes, [78] and the very strong likelihood that the signals were man-made. [79] [j] Further work in Greece has tracked SES-like "anomalous transient electric signals" back to specific human sources, and found that such signals are not excluded by the criteria used by VAN to identify SES. [81] More recent work, employing modern methods of statistical physics – namely detrended fluctuation analysis (DFA), multifractal DFA, and the wavelet transform – found that SES can be clearly distinguished from signals produced by man-made sources. [82] [83]

The validity of the VAN method, and therefore the predictive significance of SES, was based primarily on the empirical claim of demonstrated predictive success. [84] Numerous weaknesses have been uncovered in the VAN methodology, [k] and in 2011 the International Commission on Earthquake Forecasting for Civil Protection concluded that the prediction capability claimed by VAN could not be validated. [85] Most seismologists consider VAN to have been "resoundingly debunked". [86] On the other hand, the section "Earthquake Precursors and Prediction" of the Encyclopedia of Solid Earth Geophysics (part of Springer's Encyclopedia of Earth Sciences Series, 2011) ends as follows (just before its summary): "it has recently been shown that by analyzing time-series in a newly introduced time domain "natural time", the approach to the critical state can be clearly identified [Sarlis et al. 2008]. This way, they appear to have succeeded in shortening the lead-time of VAN prediction to only a few days [Uyeda and Kamogawa 2008]. This means, seismic data may play an amazing role in short term precursor when combined with SES data". [87]

Since 2001, the VAN group has introduced a concept they call "natural time", applied to the analysis of their precursors. Initially it is applied to SES to distinguish them from noise and relate them to a possible impending earthquake. If the signal is verified (classified as an "SES activity"), natural time analysis is additionally applied to the general subsequent seismicity of the area associated with the SES activity, in order to improve the time parameter of the prediction. The method treats earthquake onset as a critical phenomenon. [88] [89] A review of the updated VAN method in 2020 says that it suffers from an abundance of false positives and is therefore not usable as a prediction protocol. [90] The VAN group answered by pointing out what it considered misunderstandings in the specific reasoning. [91]

Corralitos anomaly

Probably the most celebrated seismo-electromagnetic event ever, and one of the most frequently cited examples of a possible earthquake precursor, is the 1989 Corralitos anomaly. [92] In the month prior to the 1989 Loma Prieta earthquake, measurements of the Earth's magnetic field at ultra-low frequencies by a magnetometer in Corralitos, California, just 7 km from the epicenter of the impending earthquake, started showing anomalous increases in amplitude. Just three hours before the quake, the measurements soared to about thirty times greater than normal, with amplitudes tapering off after the quake. Such amplitudes had not been seen in two years of operation, nor in a similar instrument located 54 km away. To many people such apparent locality in time and space suggested an association with the earthquake. [93]

Additional magnetometers were subsequently deployed across northern and southern California, but after ten years and several large earthquakes, similar signals have not been observed. More recent studies have cast doubt on the connection, attributing the Corralitos signals to either unrelated magnetic disturbance [94] or, even more simply, to sensor-system malfunction. [95]

Freund physics

In his investigations of crystalline physics, Friedemann Freund found that water molecules embedded in rock can dissociate into ions if the rock is under intense stress. The resulting charge carriers can generate battery currents under certain conditions. Freund suggested that perhaps these currents could be responsible for earthquake precursors such as electromagnetic radiation, earthquake lights and disturbances of the plasma in the ionosphere. [96] The study of such currents and interactions is known as "Freund physics". [97] [98] [99]

Most seismologists reject Freund's suggestion that stress-generated signals can be detected and put to use as precursors, for a number of reasons. First, it is believed that stress does not accumulate rapidly before a major earthquake, and thus there is no reason to expect large currents to be rapidly generated. Secondly, seismologists have extensively searched for statistically reliable electrical precursors, using sophisticated instrumentation, and have not identified any such precursors. And thirdly, water in the Earth's crust would cause any generated currents to be absorbed before reaching the surface. [100]

Disturbance of the daily cycle of the ionosphere
The ULF* recording of the D-layer retention of the ionosphere, which absorbs EM radiation, during the nights before the earthquake in L'Aquila, Italy, on 6 April 2009. The anomaly is indicated in red.

The ionosphere usually develops its lower D layer during the day, while at night this layer disappears as the plasma there turns to gas. During the night, the F layer of the ionosphere remains formed, at a higher altitude than the D layer. A waveguide for low HF radio frequencies up to 10 MHz is formed during the night (skywave propagation) as the F layer reflects these waves back to the Earth. The skywave is lost during the day, as the D layer absorbs these waves.

Tectonic stresses in the Earth's crust are claimed to cause waves of electric charges [101] [102] that travel to the surface of the Earth and affect the ionosphere. [103] ULF* recordings [l] of the daily cycle of the ionosphere indicate that the usual cycle could be disturbed a few days before a shallow strong earthquake. When the disturbance occurs, it is observed that either the D layer is lost during the day, resulting in elevation of the ionosphere and formation of a skywave, or the D layer appears at night, resulting in a lowering of the ionosphere and hence the absence of a skywave. [104] [105] [106]

Science centers have developed a global network of VLF transmitters and receivers that detect changes in the skywave. Each receiver also acts as a transmitter, daisy-chained over distances of 1,000–10,000 kilometers, and operates at a different frequency within the network. The general area under excitation can be determined depending on the density of the network. [107] [108] On the other hand, it has been shown that extreme global events, such as magnetic storms or solar flares, and extreme local events on the same VLF path, such as another earthquake or a volcanic eruption occurring close in time to the earthquake under evaluation, make it difficult or impossible to relate changes in the skywave to the earthquake of interest. [109]

In 2017, an article in the Journal of Geophysical Research showed that the relationship between ionospheric anomalies and large seismic events (M≥6.0) occurring globally from 2000 to 2014 depended on the presence of solar weather: when the solar data are removed from the time series, the correlation is no longer statistically significant. [110] A subsequent article in Physics of the Earth and Planetary Interiors in 2020 suggests, on the basis of this statistical relationship, that solar weather and ionospheric disturbances could potentially trigger large earthquakes. The proposed mechanism is electromagnetic induction from the ionosphere to the fault zone. Fault fluids are conductive, and can produce telluric currents at depth. The resulting change in the local magnetic field in the fault triggers dissolution of minerals and weakens the rock, while also potentially changing the groundwater chemistry and level. After the seismic event, different minerals may be precipitated, thus changing groundwater chemistry and level again. [90] This process of mineral dissolution and precipitation before and after an earthquake has been observed in Iceland. [111] This model would make sense of the ionospheric, seismic and groundwater data.

Satellite observation of ground temperature anomalies
The thermal night recordings of January 6, 21, and 28, 2001, in the Gujarat region of India. Marked with an asterisk is the epicenter of the magnitude 7.9 Bhuj earthquake of January 26. The intermediate recording reveals a thermal anomaly on January 21, shown in red. In the next recording, two days after the earthquake, the thermal anomaly has disappeared.

One way of detecting the mobility of tectonic stresses is to detect locally elevated temperatures on the surface of the crust measured by satellites. During the evaluation process, the background of daily variation and noise due to atmospheric disturbances and human activities are removed before visualizing the concentration of trends in the wider area of a fault. This method has been experimentally applied since 1995. [112] [113] [114] [115]

In a newer approach to explain the phenomenon, NASA's Friedemann Freund has proposed that the infrared radiation captured by the satellites is not due to a real increase in the surface temperature of the crust. According to this version, the emission is a result of the quantum excitation that occurs at the chemical re-bonding of positive charge carriers (holes) which travel from the deepest layers to the surface of the crust at a speed of 200 meters per second. The electric charge arises as a result of increasing tectonic stresses as the time of the earthquake approaches. For very large events this emission extends over a surface area of up to 500 × 500 kilometers, and it stops almost immediately after the earthquake. [116]

Trends

Instead of watching for anomalous phenomena that might be precursory signs of an impending earthquake, other approaches to predicting earthquakes look for trends or patterns that lead to an earthquake. As these trends may be complex and involve many variables, advanced statistical techniques are often needed to understand them; therefore these are sometimes called statistical methods. These approaches also tend to be more probabilistic and to cover longer time periods, and so merge into earthquake forecasting.[ citation needed ]

Nowcasting

Earthquake nowcasting, suggested in 2016, [117] [118] is the estimate of the current dynamic state of a seismological system, based on natural time, introduced in 2001. [119] It differs from forecasting, which aims to estimate the probability of a future event, [120] but it is also considered a potential basis for forecasting. [117] [121] Nowcasting calculations produce the "earthquake potential score", an estimation of the current level of seismic progress. [122] Typical applications include great global earthquakes and tsunamis, [123] aftershocks and induced seismicity, [121] [124] induced seismicity at gas fields, [125] seismic risk to global megacities, [120] and the study of clustering of large global earthquakes. [126]

Elastic rebound

Even the stiffest of rock is not perfectly rigid. Given a large force (such as between two immense tectonic plates moving past each other) the Earth's crust will bend or deform. According to the elastic rebound theory of Reid (1910), eventually the deformation (strain) becomes great enough that something breaks, usually at an existing fault. Slippage along the break (an earthquake) allows the rock on each side to rebound to a less deformed state. In the process energy is released in various forms, including seismic waves. [127] The cycle of tectonic force being accumulated in elastic deformation and released in a sudden rebound is then repeated. As the displacement from a single earthquake ranges from less than a meter to around 10 meters (for an M 8 quake), [128] the demonstrated existence of large strike-slip displacements of hundreds of miles shows the existence of a long-running earthquake cycle. [129] [m]
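
The timescale of this cycle follows from simple arithmetic; a sketch with invented numbers (neither the slip nor the loading rate below describes any particular fault):

```python
# Illustrative elastic-rebound arithmetic: if each rupture releases a
# fixed amount of slip, and plate motion loads the fault at a steady
# rate, the recurrence time is roughly slip / loading rate.
slip_per_event_m = 4.0          # assumed coseismic slip per rupture
loading_rate_m_per_yr = 0.035   # assumed plate loading rate (35 mm/yr)

recurrence_yr = slip_per_event_m / loading_rate_m_per_yr
print(f"~{recurrence_yr:.0f} years between events")
```

With these assumed figures, strain sufficient for the next rupture accumulates in roughly 114 years; real faults are far less regular, as the Parkfield case below illustrates.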

Characteristic earthquakes

The most studied earthquake faults (such as the Nankai megathrust, the Wasatch Fault, and the San Andreas Fault) appear to have distinct segments. The characteristic earthquake model postulates that earthquakes are generally constrained within these segments. [130] As the lengths and other properties [n] of the segments are fixed, earthquakes that rupture the entire fault should have similar characteristics. These include the maximum magnitude (which is limited by the length of the rupture), and the amount of accumulated strain needed to rupture the fault segment. Since continuous plate motions cause the strain to accumulate steadily, seismic activity on a given segment should be dominated by earthquakes of similar characteristics that recur at somewhat regular intervals. [131] For a given fault segment, identifying these characteristic earthquakes and timing their recurrence rate (or conversely return period) should therefore inform us about the next rupture; this is the approach generally used in forecasting seismic hazard. UCERF3 is a notable example of such a forecast, prepared for the state of California. [132] Return periods are also used for forecasting other rare events, such as cyclones and floods, and assume that future frequency will be similar to observed frequency to date.

The idea of characteristic earthquakes was the basis of the Parkfield prediction: fairly similar earthquakes in 1857, 1881, 1901, 1922, 1934, and 1966 suggested a pattern of breaks every 21.9 years, with a standard deviation of ±3.1 years. [133] [o] Extrapolation from the 1966 event led to a prediction of an earthquake around 1988, or before 1993 at the latest (at the 95% confidence level). [134] The appeal of such a method is that the prediction is derived entirely from the trend, which supposedly accounts for the unknown and possibly unknowable earthquake physics and fault parameters. However, in the Parkfield case the predicted earthquake did not occur until 2004, a decade late. This seriously undercuts the claim that earthquakes at Parkfield are quasi-periodic, and suggests the individual events differ sufficiently in other respects to question whether they have distinct characteristics in common. [135]
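The recurrence arithmetic behind such a prediction is simple enough to sketch. The following uses the raw event years listed above; Bakun & Lindh's published figures (21.9 ± 3.1 years) differ because they treated the anomalous 1934 event specially, so this is illustrative only:

```python
from statistics import mean, stdev

# Parkfield event years cited above
events = [1857, 1881, 1901, 1922, 1934, 1966]
intervals = [b - a for a, b in zip(events, events[1:])]  # [24, 20, 21, 12, 32]

mu = mean(intervals)      # 21.8 years between events
sigma = stdev(intervals)  # ~7.2 years raw scatter (the published +/-3.1 reflects adjustments)

# Naive extrapolation: next event expected near 1966 + mu, with an
# approximate 95% window of +/-1.96 sigma around that date.
expected = events[-1] + mu
window = (expected - 1.96 * sigma, expected + 1.96 * sigma)
print(round(expected, 1), tuple(round(w, 1) for w in window))
```

Even this wider raw-data window closes around 2002; the actual 2004 event fell outside it, which is the failure the text describes.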

The failure of the Parkfield prediction has raised doubt as to the validity of the characteristic earthquake model itself. [136] Some studies have questioned the various assumptions, including the key one that earthquakes are constrained within segments, and suggested that the "characteristic earthquakes" may be an artifact of selection bias and the shortness of seismological records (relative to earthquake cycles). [137] Other studies have asked whether additional factors need to be considered, such as the age of the fault. [p] Whether earthquake ruptures are more generally constrained within a segment (as is often seen), or break past segment boundaries (also seen), has a direct bearing on the degree of earthquake hazard: earthquakes are larger where multiple segments break, but in relieving more strain they will happen less often. [139]

Seismic gaps

At the contact where two tectonic plates slip past each other every section must eventually slip, as (in the long term) none is left behind. But they do not all slip at the same time; different sections will be at different stages in the cycle of strain (deformation) accumulation and sudden rebound. In the seismic gap model the "next big quake" should be expected not in the segments where recent seismicity has relieved the strain, but in the intervening gaps where the unrelieved strain is the greatest. [140] This model has an intuitive appeal; it is used in long-term forecasting, and was the basis of a series of circum-Pacific (Pacific Rim) forecasts in 1979 and 1989–1991. [141]

However, some underlying assumptions about seismic gaps are now known to be incorrect. A close examination suggests that "there may be no information in seismic gaps about the time of occurrence or the magnitude of the next large event in the region"; [142] statistical tests of the circum-Pacific forecasts show that the seismic gap model "did not forecast large earthquakes well". [143] Another study concluded that a long quiet period did not increase earthquake potential. [144]

Seismicity patterns

Various heuristically derived algorithms have been developed for predicting earthquakes. Probably the most widely known is the M8 family of algorithms (including the RTP method) developed under the leadership of Vladimir Keilis-Borok. M8 issues a "Time of Increased Probability" (TIP) alarm for a large earthquake of a specified magnitude upon observing certain patterns of smaller earthquakes. TIPs generally cover large areas (up to a thousand kilometers across) for up to five years. [145] Such broad parameters have made M8 controversial, as it is hard to determine whether any hits were skillfully predicted or merely the result of chance.

M8 gained considerable attention when the 2003 San Simeon and Hokkaido earthquakes occurred within a TIP. [146] In 1999, Keilis-Borok's group published a claim to have achieved statistically significant intermediate-term results using their M8 and MSc models for large earthquakes worldwide. [147] However, Geller et al. [148] are skeptical of prediction claims over any period shorter than 30 years. A widely publicized TIP for an M 6.4 quake in Southern California in 2004 was not fulfilled, nor were two other lesser-known TIPs. [149] A detailed 2008 study of the RTP method found that out of some twenty alarms only two could be considered hits (and one of those had a 60% chance of happening anyway). [150] It concluded that "RTP is not significantly different from a naïve method of guessing based on the historical rates [of] seismicity." [151]
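The evaluation problem in such cases reduces to a null-hypothesis calculation: how likely are the observed hits if every alarm can succeed only by chance? A minimal sketch (the alarm count and per-alarm background probability below are illustrative assumptions, not values from the studies cited):

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the probability of k or more
    'hits' from n alarms if each succeeds by chance with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If each of 20 alarms has a 10% chance of a random hit, then
# seeing two or more hits is entirely unremarkable:
print(prob_at_least(2, 20, 0.1))  # ~0.61
```

A prediction method is only deemed significant when its hit rate clears this kind of chance baseline by a wide margin.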

Accelerating moment release (AMR, "moment" being a measurement of seismic energy), also known as time-to-failure analysis, or accelerating seismic moment release (ASMR), is based on observations that foreshock activity prior to a major earthquake not only increased, but did so at an exponential rate. [152] In other words, a plot of the cumulative number of foreshocks gets steeper just before the main shock.

Following its formulation into a testable hypothesis by Bowman et al. (1998), [153] and a number of positive reports, AMR seemed promising [154] despite several problems. Known issues included not being detected for all locations and events, and the difficulty of projecting an accurate occurrence time when the tail end of the curve gets steep. [155] But rigorous testing has shown that apparent AMR trends likely result from how data fitting is done, [156] and from failing to account for spatiotemporal clustering of earthquakes. [157] The AMR trends are therefore statistically insignificant. Interest in AMR (as judged by the number of peer-reviewed papers) has fallen off since 2004. [158]
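The time-to-failure form commonly fitted in the AMR literature, cumulative release ≈ A + B(t_f − t)^m, can be illustrated with a simple grid-search fit to synthetic data. All values below are made up; this sketches the curve form and the fitting sensitivity, not any published analysis:

```python
import numpy as np

# Synthetic "cumulative moment release" following A + B*(tf - t)**m
# with an assumed true failure time tf = 100 (all values invented).
A_true, B_true, tf_true, m_true = 50.0, -10.0, 100.0, 0.3
t = np.linspace(0.0, 95.0, 200)
y = A_true + B_true * (tf_true - t) ** m_true

# Grid search over the nonlinear parameters (tf, m); for each pair,
# A and B follow from an ordinary linear least-squares solve.
best = (np.inf, None, None)
for tf in np.linspace(96.0, 104.0, 17):   # candidate failure times
    for m in np.linspace(0.1, 0.9, 17):   # candidate exponents
        X = np.column_stack([np.ones_like(t), (tf - t) ** m])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((X @ coef - y) ** 2)
        if rss < best[0]:
            best = (rss, tf, m)

print(best[1], best[2])  # recovers tf = 100.0, m = 0.3 on noiseless data
```

With noise added, the fitted failure time becomes unstable precisely because the curve only steepens sharply near t_f, which is the projection difficulty noted above.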

Machine learning

Rouet-Leduc et al. (2019) reported having successfully trained a regression random forest on acoustic time-series data to identify a signal emitted from fault zones that forecasts fault failure. They suggested that the identified signal, previously assumed to be statistical noise, reflects the increasing emission of energy before its sudden release during a slip event, and further postulated that their approach could bound fault failure times and lead to the identification of other unknown signals. [159] Due to the rarity of the most catastrophic earthquakes, acquiring representative data remains problematic. In response, the authors have conjectured that their model would not need to train on data from catastrophic earthquakes, since further research has shown the seismic patterns of interest to be similar in smaller earthquakes. [160]

Deep learning has also been applied to earthquake prediction. Although Båth's law and Omori's law describe the magnitude of earthquake aftershocks and their time-varying properties, the prediction of the "spatial distribution of aftershocks" remains an open research problem. Using the Theano and TensorFlow software libraries, DeVries et al. (2018) trained a neural network that achieved higher accuracy in the prediction of spatial distributions of earthquake aftershocks than the previously established methodology of Coulomb failure stress change. Notably, their model made no "assumptions about receiver plane orientation or geometry" and heavily weighted the change in shear stress, the "sum of the absolute values of the independent components of the stress-change tensor," and the von Mises yield criterion. DeVries et al. postulated that the reliance of their model on these physical quantities indicated that they might "control earthquake triggering during the most active part of the seismic cycle." For validation testing, they reserved 10% of positive training earthquake data samples and an equal quantity of randomly chosen negative samples. [161]
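Of the quantities the model weighted, the von Mises criterion is a standard scalar invariant of a stress(-change) tensor and is easy to compute directly. A generic sketch, not code from the study:

```python
import numpy as np

def von_mises(stress: np.ndarray) -> float:
    """Von Mises equivalent stress of a symmetric 3x3 stress(-change)
    tensor: sqrt(3/2 * s:s), where s is the deviatoric part."""
    s = stress - np.trace(stress) / 3.0 * np.eye(3)
    return float(np.sqrt(1.5 * np.sum(s * s)))

# Uniaxial stress of 10 MPa: the von Mises stress equals the axial stress.
sigma = np.diag([10.0, 0.0, 0.0])
print(von_mises(sigma))  # 10.0
```

Being rotation-invariant, this quantity carries no information about receiver plane orientation, which is consistent with the model's reported independence from fault geometry.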

Arnaud Mignan and Marco Broccardo have similarly analyzed the application of artificial neural networks to earthquake prediction. They found in a review of the literature that earthquake prediction research utilizing artificial neural networks has gravitated towards more sophisticated models amidst increased interest in the area. They also found that neural networks utilized in earthquake prediction with notable success rates were matched in performance by simpler models. They further addressed the issues of acquiring appropriate data for training neural networks to predict earthquakes, writing that the "structured, tabulated nature of earthquake catalogues" makes transparent machine learning models more desirable than artificial neural networks. [162]

EMP induced seismicity

It has been reported that high-energy electromagnetic pulses can induce earthquakes within two to six days after emission from EMP generators. [163] It has further been proposed that strong EM impacts could control seismicity, as the seismicity dynamics that follow appear considerably more regular than usual. [164] [165]

Notable predictions

These are predictions, or claims of predictions, that are notable either scientifically or because of public notoriety, and claim a scientific or quasi-scientific basis. As many predictions are held confidentially, or published in obscure locations, and become notable only when a success is claimed, there may be a selection bias in that hits get more attention than misses. The predictions listed here are discussed in Hough's book [50] and Geller's paper. [166]

1975: Haicheng, China

The M 7.3 1975 Haicheng earthquake is the most widely cited "success" of earthquake prediction. [167] The ostensible story is that study of seismic activity in the region led the Chinese authorities to issue a medium-term prediction in June 1974, and the political authorities therefore ordered various measures taken, including enforced evacuation of homes, construction of "simple outdoor structures", and showing of movies out-of-doors. The quake, striking at 19:36, was powerful enough to destroy or badly damage about half of the homes. However, the "effective preventative measures taken" were said to have kept the death toll under 300 in an area with population of about 1.6 million, where otherwise tens of thousands of fatalities might have been expected. [168]

However, although a major earthquake occurred, there has been some skepticism about the narrative of measures taken on the basis of a timely prediction. This event occurred during the Cultural Revolution, when "belief in earthquake prediction was made an element of ideological orthodoxy that distinguished the true party liners from right wing deviationists". [169] Recordkeeping was disordered, making it difficult to verify details, including whether there was any ordered evacuation. The method used for either the medium-term or short-term predictions (other than "Chairman Mao's revolutionary line" [170] ) has not been specified. [q] The evacuation may have been spontaneous, following the strong (M 4.7) foreshock that occurred the day before. [172] [r]

A 2006 study that had access to an extensive range of records found that the predictions were flawed. "In particular, there was no official short-term prediction, although such a prediction was made by individual scientists." [173] Also: "it was the foreshocks alone that triggered the final decisions of warning and evacuation". They estimated that 2,041 lives were lost. That more did not die was attributed to a number of fortuitous circumstances, including earthquake education in the previous months (prompted by elevated seismic activity), local initiative, timing (occurring when people were neither working nor asleep), and local style of construction. The authors conclude that, while unsatisfactory as a prediction, "it was an attempt to predict a major earthquake that for the first time did not end up with practical failure." [173]

1981: Lima, Peru (Brady)

In 1976, Brian Brady, a physicist, then at the U.S. Bureau of Mines, where he had studied how rocks fracture, "concluded a series of four articles on the theory of earthquakes with the deduction that strain building in the subduction zone [off-shore of Peru] might result in an earthquake of large magnitude within a period of seven to fourteen years from mid November 1974." [174] In an internal memo written in June 1978 he narrowed the time window to "October to November, 1981", with a main shock in the range of 9.2±0.2. [175] In a 1980 memo he was reported as specifying "mid-September 1980". [176] This was discussed at a scientific seminar in San Juan, Argentina, in October 1980, where Brady's colleague, W. Spence, presented a paper. Brady and Spence then met with government officials from the U.S. and Peru on 29 October, and "forecast a series of large magnitude earthquakes in the second half of 1981." [174] This prediction became widely known in Peru, following what the U.S. embassy described as "sensational first page headlines carried in most Lima dailies" on January 26, 1981. [177]

On 27 January 1981, after reviewing the Brady-Spence prediction, the U.S. National Earthquake Prediction Evaluation Council (NEPEC) announced it was "unconvinced of the scientific validity" of the prediction, and had been "shown nothing in the observed seismicity data, or in the theory insofar as presented, that lends substance to the predicted times, locations, and magnitudes of the earthquakes." It went on to say that while there was a probability of major earthquakes at the predicted times, that probability was low, and recommended that "the prediction not be given serious consideration." [178]

Unfazed, [s] Brady subsequently revised his forecast, stating there would be at least three earthquakes on or about July 6, August 18 and September 24, 1981, [180] leading one USGS official to complain: "If he is allowed to continue to play this game ... he will eventually get a hit and his theories will be considered valid by many." [181]

On June 28 (the date most widely taken as the date of the first predicted earthquake), it was reported that: "the population of Lima passed a quiet Sunday". [182] The headline on one Peruvian newspaper: "NO PASÓ NADA" ("Nothing happened"). [183]

In July Brady formally withdrew his prediction on the grounds that prerequisite seismic activity had not occurred. [184] Economic losses due to reduced tourism during this episode have been roughly estimated at one hundred million dollars. [185]

1985–1993: Parkfield, U.S. (Bakun-Lindh)

The "Parkfield earthquake prediction experiment" was the most heralded scientific earthquake prediction ever. [186] [t] It was based on an observation that the Parkfield segment of the San Andreas Fault [u] breaks regularly with a moderate earthquake of about M 6 every several decades: 1857, 1881, 1901, 1922, 1934, and 1966. [187] More particularly, Bakun & Lindh (1985) pointed out that, if the 1934 quake is excluded, these occur every 22 years, ±4.3 years. Counting from 1966, they predicted a 95% chance that the next earthquake would hit around 1988, or 1993 at the latest. The National Earthquake Prediction Evaluation Council (NEPEC) evaluated this, and concurred. [188] The U.S. Geological Survey and the State of California therefore established one of the "most sophisticated and densest nets of monitoring instruments in the world", [189] in part to identify any precursors when the quake came. Confidence was high enough that detailed plans were made for alerting emergency authorities if there were signs an earthquake was imminent. [190] In the words of The Economist : "never has an ambush been more carefully laid for such an event." [191]

The year 1993 came and went without fulfillment. Eventually there was an M 6.0 earthquake on the Parkfield segment of the fault, on 28 September 2004, but without forewarning or obvious precursors. [192] While the experiment in catching an earthquake is considered by many scientists to have been successful, [193] the prediction was unsuccessful in that the eventual event was a decade late. [v]

1983–1995: Greece (VAN)

In 1981, the "VAN" group, headed by Panayiotis Varotsos, said that they had found a relationship between earthquakes and 'seismic electric signals' (SES). In 1984 they presented a table of 23 earthquakes from 19 January 1983 to 19 September 1983, of which they claimed to have successfully predicted 18. [195] Other lists followed, such as their 1991 claim of predicting six out of seven earthquakes with Ms ≥ 5.5 in the period of 1 April 1987 through 10 August 1989, or five out of seven earthquakes with Ms ≥ 5.3 in the overlapping period of 15 May 1988 to 10 August 1989. [w] In 1996 they published a "Summary of all Predictions issued from January 1st, 1987 to June 15, 1995", [196] amounting to 94 predictions. [197] Matching this against a list of "All earthquakes with MS(ATH)" [198] [x] and within geographical bounds including most of Greece, [y] they came up with a list of 14 earthquakes they should have predicted. Here they claimed ten successes, for a success rate of 70%. [200] [z]

The VAN predictions have been criticized on various grounds, including being geophysically implausible, [201] "vague and ambiguous", [202] failing to satisfy prediction criteria, [203] and retroactive adjustment of parameters. [204] A critical review of 14 cases where VAN claimed 10 successes showed only one case where an earthquake occurred within the prediction parameters. [205] The VAN predictions not only fail to do better than chance, but show "a much better association with the events which occurred before them", according to Mulargia and Gasperini. [206] Other early reviews found that the VAN results, when evaluated by definite parameters, were statistically significant. [207] [208] Both positive and negative views on VAN predictions from this period were summarized in the 1996 book A Critical Review of VAN edited by Sir James Lighthill [209] and in a debate issue presented by the journal Geophysical Research Letters that was focused on the statistical significance of the VAN method. [210] VAN had the opportunity to reply to their critics in those review publications. [211] In 2011, the ICEF reviewed the 1996 debate, and concluded that the optimistic SES prediction capability claimed by VAN could not be validated. [85] In 2013, the SES activities were found [212] to be coincident with the minima of the fluctuations of the order parameter of seismicity, which have been shown [213] to be statistically significant precursors by employing the event coincidence analysis. [214]

A crucial issue is the large and often indeterminate parameters of the predictions, [215] such that some critics say these are not predictions, and should not be recognized as such. [216] Much of the controversy with VAN arises from this failure to adequately specify these parameters. Some of their telegrams include predictions of two distinct earthquake events, such as (typically) one earthquake predicted at 300 km "NW" of Athens, and another at 240 km "W", "with magnitutes [sic] 5,3 and 5,8", with no time limit. [217] [aa] Estimation of the time parameter was introduced into the VAN method in 2001 by means of 'natural time' analysis. [87] VAN has disputed the 'pessimistic' conclusions of their critics, but the critics have not relented. [218] It was suggested that VAN failed to account for clustering of earthquakes, [204] or that they interpreted their data differently during periods of greater seismic activity. [219]

VAN has been criticized on several occasions for causing public panic and widespread unrest. [220] This has been exacerbated by the broadness of their predictions, which cover large areas of Greece (up to 240 kilometers across, and often pairs of areas), [ab] much larger than the areas actually affected by earthquakes of the magnitudes predicted (usually several tens of kilometers across). [221] [ac] Magnitudes are similarly broad: a predicted magnitude of "6.0" represents a range from a benign magnitude 5.3 to a broadly destructive 6.7. [ad] Coupled with indeterminate time windows of a month or more, [222] such predictions "cannot be practically utilized" [223] to determine an appropriate level of preparedness, whether to curtail usual societal functioning, or even to issue public warnings. [ae]
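The width of such a magnitude window matters because radiated energy scales steeply with magnitude. Under the standard Gutenberg–Richter energy relation, log10(E) ≈ 1.5 M + const, the two ends of a ±0.7 window differ by roughly two orders of magnitude in energy, as this small sketch shows:

```python
# Energy ratio between two magnitudes under log10(E) = 1.5*M + const.
def energy_ratio(m1: float, m2: float) -> float:
    return 10 ** (1.5 * (m1 - m2))

# A predicted "6.0" spanning 5.3 to 6.7 covers a ~126-fold energy range:
print(round(energy_ratio(6.7, 5.3), 1))  # 125.9
```

This is why a window that wide cannot inform the choice between ignoring a prediction and ordering evacuations.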

2008: Greece (VAN)

Since 2006, VAN has claimed that all alarms related to SES activity have been made public by posting at arxiv.org. Such SES activity is evaluated using a new method they call 'natural time'. One such report was posted on February 1, 2008, two weeks before the strongest earthquake in Greece during the period 1983–2011. This earthquake occurred on February 14, 2008, with magnitude (Mw) 6.9. VAN's report was also described in an article in the newspaper Ethnos on February 10, 2008. [225] However, Gerassimos Papadopoulos commented that the VAN reports were confusing and ambiguous, and that "none of the claims for successful VAN predictions is justified." [226] A reply to this comment, which insisted on the prediction's accuracy, was published in the same issue. [227]

1989: Loma Prieta, U.S.

The 1989 Loma Prieta earthquake (epicenter in the Santa Cruz Mountains northwest of San Juan Bautista, California) caused significant damage in the San Francisco Bay Area of California. [228] The United States Geological Survey (USGS) reportedly claimed, twelve hours after the event, that it had "forecast" this earthquake in a report the previous year. [229] USGS staff subsequently claimed this quake had been "anticipated"; [230] various other claims of prediction have also been made. [231]

Ruth Harris (1998) reviewed 18 papers (with 26 forecasts) dating from 1910 "that variously offer or relate to scientific forecasts of the 1989 Loma Prieta earthquake." (In this case no distinction is made between a forecast, which is limited to a probabilistic estimate of an earthquake happening over some time period, and a more specific prediction. [232] ) None of these forecasts can be rigorously tested due to lack of specificity, [233] and where a forecast does bracket the correct time and location, the window was so broad (e.g., covering the greater part of California for five years) as to lose any value as a prediction. Predictions that came close (but given a probability of only 30%) had ten- or twenty-year windows. [234]

One debated prediction came from the M8 algorithm used by Keilis-Borok and associates in four forecasts. [235] The first of these forecasts missed both magnitude (M 7.5) and time (a five-year window from 1 January 1984, to 31 December 1988). They did get the location, by including most of California and half of Nevada. [236] A subsequent revision, presented to the NEPEC, extended the time window to 1 July 1992, and reduced the location to only central California; the magnitude remained the same. A figure they presented had two more revisions, for M ≥ 7.0 quakes in central California. The five-year time window for one ended in July 1989, and so missed the Loma Prieta event; the second revision extended to 1990, and so included Loma Prieta. [237]

When discussing success or failure of prediction for the Loma Prieta earthquake, some scientists argue that it did not occur on the San Andreas Fault (the focus of most of the forecasts), and involved dip-slip (vertical) movement rather than strike-slip (horizontal) movement, and so was not predicted. [238]

Other scientists argue that it did occur in the San Andreas Fault zone, and released much of the strain accumulated since the 1906 San Francisco earthquake; therefore several of the forecasts were correct. [239] Hough states that "most seismologists" do not believe this quake was predicted "per se". [240] In a strict sense there were no predictions, only forecasts, which were only partially successful.

Iben Browning claimed to have predicted the Loma Prieta event, but (as will be seen in the next section) this claim has been rejected.

1990: New Madrid, U.S. (Browning)

Iben Browning (a scientist with a Ph.D. degree in zoology and training as a biophysicist, but no experience in geology, geophysics, or seismology) was an "independent business consultant" who forecast long-term climate trends for businesses. [af] He supported the idea (scientifically unproven) that volcanoes and earthquakes are more likely to be triggered when the tidal force of the Sun and the Moon coincide to exert maximum stress on the Earth's crust (syzygy). [ag] Having calculated when these tidal forces maximize, Browning then "projected" [242] what areas were most at risk for a large earthquake. An area he mentioned frequently was the New Madrid seismic zone at the southeast corner of the state of Missouri, the site of three very large earthquakes in 1811–12, which he coupled with the date of 3 December 1990.

Browning's reputation and perceived credibility were boosted when he claimed in various promotional flyers and advertisements to have predicted (among various other events [ah] ) the Loma Prieta earthquake of 17 October 1989. [244] The National Earthquake Prediction Evaluation Council (NEPEC) formed an Ad Hoc Working Group (AHWG) to evaluate Browning's prediction. Its report (issued 18 October 1990) specifically rejected the claim of a successful prediction of the Loma Prieta earthquake. [245] A transcript of his talk in San Francisco on 10 October showed he had said: "there will probably be several earthquakes around the world, Richter 6+, and there may be a volcano or two" – which, on a global scale, is about average for a week – with no mention of any earthquake in California. [246]

Though the AHWG report disproved both Browning's claims of prior success and the basis of his "projection", it made little impact after a year of continued claims of a successful prediction. Browning's prediction received the support of geophysicist David Stewart, [ai] and the tacit endorsement of many public authorities in their preparations for a major disaster, all of which was amplified by massive exposure in the news media. [249] Nothing happened on 3 December, [250] and Browning died of a heart attack seven months later. [251]

2004 and 2005: Southern California, U.S. (Keilis-Borok)

The M8 algorithm (developed under the leadership of Vladimir Keilis-Borok at UCLA) gained respect by the apparently successful predictions of the 2003 San Simeon and Hokkaido earthquakes. [252] Great interest was therefore generated by the prediction in early 2004 of a M ≥ 6.4 earthquake to occur somewhere within an area of southern California of approximately 12,000 square miles, on or before 5 September 2004. [146] In evaluating this prediction the California Earthquake Prediction Evaluation Council (CEPEC) noted that this method had not yet made enough predictions for statistical validation, and was sensitive to input assumptions. It therefore concluded that no "special public policy actions" were warranted, though it reminded all Californians "of the significant seismic hazards throughout the state." [146] The predicted earthquake did not occur.

A very similar prediction was made for an earthquake on or before 14 August 2005, in approximately the same area of southern California. The CEPEC's evaluation and recommendation were essentially the same, this time noting that the previous prediction and two others had not been fulfilled. [253] This prediction also failed.

2009: L'Aquila, Italy (Giuliani)

At 03:32 on 6 April 2009, the Abruzzo region of central Italy was rocked by an M 6.3 earthquake. [254] In the city of L'Aquila and surrounding area around 60,000 buildings collapsed or were seriously damaged, resulting in 308 deaths and 67,500 people left homeless. [255] Around the same time, it was reported that Giampaolo Giuliani had predicted the earthquake, had tried to warn the public, but had been muzzled by the Italian government. [256]

Giampaolo Giuliani was a laboratory technician at the Laboratori Nazionali del Gran Sasso. As a hobby he had for some years been monitoring radon using instruments he had designed and built. Prior to the L'Aquila earthquake he was unknown to the scientific community, and had not published any scientific work. [257] He had been interviewed on 24 March by an Italian-language blog, Donne Democratiche, about a swarm of low-level earthquakes in the Abruzzo region that had started the previous December. He said that this swarm was normal and would diminish by the end of March. On 30 March, L'Aquila was struck by a magnitude 4.0 temblor, the largest to date. [258]

On 27 March Giuliani warned the mayor of L'Aquila there could be an earthquake within 24 hours, and an earthquake M~2.3 occurred. [259] On 29 March he made a second prediction. [260] He telephoned the mayor of the town of Sulmona, about 55 kilometers southeast of L'Aquila, to expect a "damaging" – or even "catastrophic" – earthquake within 6 to 24 hours. Loudspeaker vans were used to warn the inhabitants of Sulmona to evacuate, with consequential panic. No quake ensued and Giuliani was cited for inciting public alarm and enjoined from making future public predictions. [261]

After the L'Aquila event Giuliani claimed that he had found alarming rises in radon levels just hours before. [262] He said he had warned relatives, friends and colleagues on the evening before the earthquake hit. [263] He was subsequently interviewed by the International Commission on Earthquake Forecasting for Civil Protection, which found that Giuliani had not transmitted a valid prediction of the mainshock to the civil authorities before its occurrence. [264]

Difficulty or impossibility

As the preceding examples show, the record of earthquake prediction has been disappointing. [265] The optimism of the 1970s that routine prediction of earthquakes would arrive "soon", perhaps within ten years, [266] had come up disappointingly short by the 1990s, [267] and many scientists began wondering why. By 1997 it was being positively stated that earthquakes cannot be predicted, [148] which led to a notable debate in 1999 on whether prediction of individual earthquakes is a realistic scientific goal. [268]

Earthquake prediction may have failed only because it is "fiendishly difficult" [269] and still beyond the current competency of science. Despite the confident announcement four decades ago that seismology was "on the verge" of making reliable predictions, [52] there may yet be an underestimation of the difficulties. As early as 1978 it was reported that earthquake rupture might be complicated by "heterogeneous distribution of mechanical properties along the fault", [270] and in 1986 that geometrical irregularities in the fault surface "appear to exert major controls on the starting and stopping of ruptures". [271] Another study attributed significant differences in fault behavior to the maturity of the fault. [aj] These kinds of complexities are not reflected in current prediction methods. [273]

Seismology may even yet lack an adequate grasp of its most central concept, elastic rebound theory. A simulation that explored assumptions regarding the distribution of slip found results "not in agreement with the classical view of the elastic rebound theory". (This was attributed to details of fault heterogeneity not accounted for in the theory. [274] )

Earthquake prediction may be intrinsically impossible. In 1997 it was argued that the Earth is in a state of self-organized criticality, "where any small earthquake has some probability of cascading into a large event". [275] It has also been argued on decision-theoretic grounds that "prediction of major earthquakes is, in any practical sense, impossible." [276] In 2021, a multitude of authors from a variety of universities and research institutes studying the China Seismo-Electromagnetic Satellite reported [277] that the claims based on self-organized criticality, stating that at any moment any small earthquake can eventually cascade to a large event, do not hold up [278] in view of the results obtained to date by natural time analysis.

That earthquake prediction might be intrinsically impossible has been strongly disputed, [279] but the best disproof of impossibility – effective earthquake prediction – has yet to be demonstrated. [ak]

See also

Notes

  1. Kagan (1997b, §2.1) says: "This definition has several defects which contribute to confusion and difficulty in prediction research." In addition to specification of time, location, and magnitude, Allen suggested three other requirements: 4) indication of the author's confidence in the prediction, 5) the chance of an earthquake occurring anyway as a random event, and 6) publication in a form that gives failures the same visibility as successes. Kagan & Knopoff (1987, p. 1563) define prediction (in part) "to be a formal rule whereby the available space-time-seismic moment manifold of earthquake occurrence is significantly contracted …"
  2. ICEF (2011, p. 327) distinguishes between predictions (as deterministic) and forecasts (as probabilistic).
  3. However, Mileti & Sorensen (1990) have argued that the extent of panic related to public disaster forecasts, and the 'cry wolf' problem with respect to repeated false alarms, have both been overestimated, and can be mitigated through appropriate communications from the authorities.
  4. The IASPEI Sub-Commission for Earthquake Prediction defined a precursor as "a quantitatively measurable change in an environmental parameter that occurs before mainshocks, and that is thought to be linked to the preparation process for this mainshock." [25]
  5. Subsequent diffusion of water back into the affected volume of rock is what leads to failure. [44]
  6. Giampaolo Giuliani's claimed prediction of the L'Aquila earthquake was based on monitoring of radon levels.
  7. Over time the claim was modified. See 1983–1995: Greece (VAN) for more details.
  8. One enthusiastic supporter (Uyeda) was reported as saying "VAN is the biggest invention since the time of Archimedes". [71]
  9. A short overview of the debate can be found in an exchange of letters in the June 1998 issue of Physics Today. [75]
  10. For example the VAN "IOA" station was next to an antenna park, and the station at Pirgos, where most of the 1980s predictions were derived, was found to lie over the buried grounding grid of a military radio transmitter. VAN has not distinguished their "seismic electric signals" from artificial electromagnetic noise or from radio-telecommunication and industrial sources. [80]
  11. For example it has been shown that the VAN predictions are more likely to follow an earthquake than to precede one. It seems that where there have been recent shocks the VAN personnel are more likely to interpret the usual electrical variations as SES. The tendency for earthquakes to cluster then accounts for an increased chance of an earthquake in the rather broad prediction window. Other aspects of this will be discussed below.
  12. The literature on geophysical phenomena and ionospheric disturbances uses the term ULF (Ultra Low Frequency) for the frequency band below 10 Hz. The band referred to as ULF on the Radio wave page corresponds to a different part of the spectrum, formerly referred to as VF (Voice Frequency). In this article the geophysical usage is marked ULF*.
  13. Evans (1997, §2.2) provides a description of the "self-organized criticality" (SOC) paradigm that is displacing the elastic rebound model.
  14. These include the type of rock and fault geometry.
  15. Of course these were not the only earthquakes in this period. The attentive reader will recall that, in seismically active areas, earthquakes of some magnitude happen fairly constantly. The "Parkfield earthquakes" are either the ones noted in the historical record, or those selected from the instrumental record on the basis of location and magnitude. Jackson & Kagan (2006, p. S399) and Kagan (1997, pp. 211–212, 213) argue that the selection parameters can bias the statistics, and that sequences of four or six quakes, with different recurrence intervals, are also plausible.
  16. Young faults are expected to have complex, irregular surfaces, which impede slippage. In time these rough spots are ground off, changing the mechanical characteristics of the fault. [138]
  17. Measurement of an uplift has been claimed, but that was 185 km away, and likely surveyed by inexperienced amateurs. [171]
  18. According to Wang et al. (2006, p. 762) foreshocks were widely understood to precede a large earthquake, "which may explain why various [local authorities] made their own evacuation decisions".
  19. The chairman of the NEPEC later complained to the Agency for International Development that one of its staff members had been instrumental in encouraging Brady and promulgating his prediction long after it had been scientifically discredited. [179]
  20. Probably the most widely anticipated prediction ever was Iben Browning's 1990 New Madrid prediction, but it lacked any scientific basis.
  21. Near the small town of Parkfield, California, roughly halfway between San Francisco and Los Angeles.
  22. It has also been argued that the actual quake differed from the kind expected, [136] and that the prediction was no more significant than a simpler null hypothesis. [194]
  23. Varotsos & Lazaridou (1991) Table 2 (p. 340) and Table 3 (p. 341) includes nine predictions (unnumbered) from 27 April 1987 to 28 April 1988, with a tenth prediction issued on 26 February 1987 mentioned in a footnote. Two of these earthquakes were excluded from Table 3 on the grounds of having occurred in neighboring Albania. Table 1 (p. 333) includes 17 predictions (numbered) issued from 15 May 1988 to 23 July 1989. A footnote mentions a missed (unpredicted) earthquake on 19 March 1989; all 17 entries show associated earthquakes, and presumably are thereby deemed to have been successful predictions. Table 4 (p. 345) is a continuation of Table 1 (p. 346) out to 30 November 1989, adding five additional predictions with associated earthquakes.
  24. "MS(ATH)" is the MS magnitude reported by the National Observatory of Athens (SI-NOA), or VAN's estimate of what that magnitude would be. [199] These differ from the MS magnitudes reported by the USGS.
  25. Specifically, between 36° and 41° north latitude and 19° to 25° east longitude. [199]
  26. They have suggested the success rate should be higher, as one of the missed quakes would have been predicted but for attendance at a conference, and in another case a "clear SES" was recognized but a magnitude could not be determined for lack of operating stations.
  27. This pair of predictions was issued on 9/1/1988, and a similar pair of predictions was re-iterated on 9/30/1988, except that the predicted magnitudes were reduced to M(l)=5.0 and 5.3, respectively. In fact, an earthquake did occur approximately 240 km west of Athens, on 10/16/1988, with magnitude Ms(ATH)=6.0, which would correspond to a local magnitude M(l) of 5.5. [198]
  28. While some analyses have been done on the basis of a 100 km range (e.g., Hamada 1993, p. 205), Varotsos & Lazaridou (1991, p. 339) claim credit for earthquakes within a radius of 120 km.
  29. Geller (1996a, 6.4.2) notes that while Kobe was severely damaged by the 1995 Mw6.9 earthquake, damage in Osaka, only 30 km away, was relatively light.
  30. VAN predictions generally do not specify the magnitude scale or precision, but they have generally claimed a precision of ±0.7.
  31. As an instance of the quandary public officials face: in 1995 Professor Varotsos reportedly filed a complaint with the public prosecutor accusing government officials of negligence in not responding to his supposed prediction of an earthquake. A government official was quoted as saying "VAN's prediction was not of any use" in that it covered two-thirds of the area of Greece. [224]
  32. Spence et al. 1993 (USGS Circular 1083) is the most comprehensive and thorough study of the Browning prediction, and appears to be the main source of most other reports. In the following notes, where an item is found in this document the pdf pagination is shown in brackets.
  33. A report on Browning's prediction cited over a dozen studies of possible tidal triggering of earthquakes, but concluded that "conclusive evidence of such a correlation has not been found". It also found that Browning's identification of a particular high tide as triggering a particular earthquake "difficult to justify". [241]
  34. Including "a 50/50 probability that the federal government of the U.S. will fall in 1992." [243]
  35. Previously involved in a psychic prediction of an earthquake for North Carolina in 1975, [247] Stewart sent a 13-page memo to a number of colleagues extolling Browning's supposed accomplishments, including predicting Loma Prieta. [248]
  36. More mature faults presumably slip more readily because they have been ground smoother and flatter. [272]
  37. "Despite over a century of scientific effort, the understanding of earthquake predictability remains immature. This lack of understanding is reflected in the inability to predict large earthquakes in the deterministic short-term sense." [280]


References

  1. Geller et al. 1997 , p. 1616, following Allen 1976 , p. 2070, who in turn followed Wood & Gutenberg 1935.
  2. Kagan 1997b , p. 507.
  3. Kanamori 2003 , p. 1205.
  4. Geller et al. 1997 , p. 1617; Geller 1997 , p. 427, §2.3; Console 2001 , p. 261.
  5. ICEF 2011 , p. 328; Jackson 2004 , p. 344.
  6. Wang et al. 2006.
  7. Geller 1997 , Summary.
  8. Kagan 1997b; Geller 1997; Main 1999.
  9. Mulargia & Gasperini 1992 , p. 32; Luen & Stark 2008 , p. 302.
  10. Luen & Stark 2008; Console 2001.
  11. Jackson 1996a , p. 3775.
  12. Jones 1985 , p. 1669.
  13. Console 2001 , p. 1261.
  14. Luen & Stark 2008. This was based on data from Southern California.
  15. Wade 1977.
  16. Hall 2011; Cartlidge 2011. Additional details in Cartlidge 2012.
  17. Geller 1997 , p. 437, §5.2.
  18. Atwood & Major 1998.
  19. Saegusa 1999.
  20. Mason 2003 , p. 48 and throughout.
  21. Stiros 1997.
  22. Stiros 1997 , p. 483.
  23. Panel on Earthquake Prediction 1976 , p. 9.
  24. Uyeda, Nagao & Kamogawa 2009 , p. 205; Hayakawa 2015.
  25. Geller 1997, §3.1.
  26. Geller 1997 , p. 429, §3.
  27. E.g., Claudius Aelianus, in De natura animalium, book 11, commenting on the destruction of Helike in 373 BC, but writing five centuries later.
  28. Rikitake 1979 , p. 294. Cicerone, Ebel & Britton 2009 has a more recent compilation.
  29. Jackson 2004 , p. 335.
  30. Geller 1997 , p. 425. See also: Jackson 2004 , p. 348: "The search for precursors has a checkered history, with no convincing successes." Zechar & Jordan 2008 , p. 723: "The consistent failure to find reliable earthquake precursors...". ICEF 2009: "... no convincing evidence of diagnostic precursors."
  31. Wyss & Booth 1997 , p. 424.
  32. ICEF 2011 , p. 338.
  33. ICEF 2011 , p. 361.
  34. Bolt 1993 , pp. 30–32.
  35. "Animals & Earthquake Prediction | U.S. Geological Survey". United States Geological Survey.
  36. ICEF 2011 , p. 336; Lott, Hart & Howell 1981 , p. 1204.
  37. "Review: Can Animals Predict Earthquakes?". Bulletin of the Seismological Society of America. 108 (3A): 1031. https://pubs.geoscienceworld.org/ssa/bssa/article-abstract/108/3A/1031/530275/Review-Can-Animals-Predict-Earthquakes-Review-Can?redirectedFrom=fulltext.
  38. "Can Animals Predict Earthquakes? | Seismological Society of America". Seismological Society of America.
  39. Lott, Hart & Howell 1981.
  40. Brown & Kulik 1977.
  41. Freund & Stolc 2013.
  42. Main et al. 2012 , p. 215.
  43. Main et al. 2012 , p. 217.
  44. Main et al. 2012, p. 215; Hammond 1973.
  45. Hammond 1974.
  46. Scholz, Sykes & Aggarwal 1973, quoted by Hammond 1973.
  47. ICEF 2011 , pp. 333–334; McEvilly & Johnson 1974; Lindh, Lockner & Lee 1978.
  48. Main et al. 2012 , p. 226.
  49. Main et al. 2012 , pp. 220–221, 226; see also Lindh, Lockner & Lee 1978.
  50. Hough 2010b.
  51. Hammond 1973. Additional references in Geller 1997 , §2.4.
  52. Scholz, Sykes & Aggarwal 1973.
  53. Aggarwal et al. 1975.
  54. Hough 2010b , p. 110.
  55. Allen 1983 , p. 79; Whitcomb 1977.
  56. McEvilly & Johnson 1974.
  57. Lindh, Lockner & Lee 1978.
  58. ICEF 2011 , p. 333.
  59. Cicerone, Ebel & Britton 2009 , p. 382.
  60. ICEF 2011 , p. 334; Hough 2010b , pp. 93–95.
  61. Johnston 2002 , p. 621.
  62. Park 1996 , p. 493.
  63. See Geller 1996a and Geller 1996b for some history of these hopes.
  64. ICEF 2011 , p. 335.
  65. Park, Dalrymple & Larsen 2007 , paragraphs 1 and 32. See also Johnston et al. 2006 , p. S218 "no VAN-type SES observed" and Kappler, Morrison & Egbert 2010 "no effects found that can be reasonably characterized as precursors".
  66. ICEF 2011 , p. 335, Summary.
  67. Varotsos, Alexopoulos & Nomicos 1981, described by Mulargia & Gasperini 1992 , p. 32, and Kagan 1997b , p. 512, §3.3.1.
  68. Varotsos & Alexopoulos 1984b , p. 100.
  69. Varotsos & Alexopoulos 1984b , p. 120. Italicization from the original.
  70. Varotsos & Alexopoulos 1984b , p. 117, Table 3; Varotsos et al. 1986; Varotsos & Lazaridou 1991 , p. 341, Table 3; Varotsos et al. 1996a , p. 55, Table 3. These are examined in more detail in 1983–1995: Greece (VAN).
  71. Chouliaras & Stavrakakis 1999, p. 223.
  72. Mulargia & Gasperini 1992 , p. 32.
  73. Geller 1996b; "Table of contents". Geophysical Research Letters. 23 (11). 27 May 1996. doi:10.1002/grl.v23.11.
  74. The proceedings were published as A Critical Review of VAN (Lighthill 1996). See Jackson & Kagan (1998) for a summary critique.
  75. Geller et al. 1998; Anagnostopoulos 1998.
  76. Mulargia & Gasperini 1996a , p. 1324; Jackson 1996b , p. 1365; Jackson & Kagan 1998; Stiros 1997 , p. 478.
  77. Drakopoulos, Stavrakakis & Latoussakis 1993 , pp. 223, 236; Stavrakakis & Drakopoulos 1996; Wyss 1996 , p. 1301.
  78. Jackson 1996b , p. 1365; Gruszow et al. 1996 , p. 2027.
  79. Gruszow et al. 1996 , p. 2025.
  80. Chouliaras & Stavrakakis 1999; Pham et al. 1998, pp. 2025, 2028; Pham et al. 1999.
  81. Pham et al. 2002.
  82. Varotsos, Sarlis & Skordas 2003a.
  83. Varotsos, Sarlis & Skordas 2003b.
  84. Stiros 1997 , p. 481.
  85. ICEF 2011 , pp. 335–336.
  86. Hough 2010b , p. 195.
  87. Uyeda, Nagao & Kamogawa 2011.
  88. Varotsos, Sarlis & Skordas 2002 [ full citation needed ]; Varotsos 2006 [ full citation needed ]; Rundle et al. 2012.
  89. Huang 2015.
  90. Helman 2020.
  91. Sarlis et al. 2020.
  92. Hough 2010 , pp. 131–133; Thomas, Love & Johnston 2009.
  93. Fraser-Smith et al. 1990 , p. 1467 called it "encouraging".
  94. Campbell 2009.
  95. Thomas, Love & Johnston 2009.
  96. Freund 2000.
  97. Hough 2010b , pp. 133–135.
  98. Heraud, Centa & Bleier 2015.
  99. Enriquez 2015.
  100. Hough 2010b , pp. 137–139.
  101. Freund, Takeuchi & Lau 2006.
  102. Freund & Sornette 2007.
  103. Freund et al. 2009.
  104. Eftaxias et al. 2009.
  105. Eftaxias et al. 2010.
  106. Tsolis & Xenos 2010.
  107. Rozhnoi et al. 2009.
  108. Biagi et al. 2011.
  109. Politis, Potirakis & Hayakawa 2020.
  110. Thomas, JN; Huard, J; Masci, F (2017). "A statistical study of global ionospheric map total electron content changes prior to occurrences of M ≥ 6.0 earthquakes during 2000–2014". Journal of Geophysical Research: Space Physics. 122 (2): 2151–2161. doi: 10.1002/2016JA023652 . S2CID   132455032.
  111. Andrén, Margareta; Stockmann, Gabrielle; Skelton, Alasdair (2016). "Coupling between mineral reactions, chemical changes in groundwater, and earthquakes in Iceland". Journal of Geophysical Research: Solid Earth. 121 (4): 2315–2337. Bibcode:2016JGRB..121.2315A. doi: 10.1002/2015JB012614 . S2CID   131535687.
  112. Filizzola et al. 2004.
  113. Lisi et al. 2010.
  114. Pergola et al. 2010.
  115. Genzano et al. 2009.
  116. Freund 2010.
  117. Rundle et al. 2016.
  118. Rundle et al. 2019.
  119. Varotsos, Sarlis & Skordas 2001.
  120. Rundle et al. 2018b.
  121. Luginbuhl, Rundle & Turcotte 2019.
  122. Pasari 2019.
  123. Rundle et al. 2020.
  124. Luginbuhl et al. 2018.
  125. Luginbuhl, Rundle & Turcotte 2018b.
  126. Luginbuhl, Rundle & Turcotte 2018a.
  127. Reid 1910 , p. 22; ICEF 2011 , p. 329.
  128. Wells & Coppersmith 1994 , p. 993, Fig. 11.
  129. Zoback 2006 provides a clear explanation.
  130. Castellaro 2003.
  131. Schwartz & Coppersmith 1984; Tiampo & Shcherbakov 2012 , p. 93, §2.2.
  132. Field et al. 2008.
  133. Bakun & Lindh 1985 , p. 619.
  134. Bakun & Lindh 1985 , p. 621.
  135. Jackson & Kagan 2006 , p. S408 say the claim of quasi-periodicity is "baseless".
  136. Jackson & Kagan 2006.
  137. Kagan & Jackson 1991 , pp. 21, 420; Stein, Friedrich & Newman 2005; Jackson & Kagan 2006; Tiampo & Shcherbakov 2012 , §2.2, and references there; Kagan, Jackson & Geller 2012; Main 1999.
  138. Cowan, Nicol & Tonkin 1996; Stein & Newman 2004, p. 185.
  139. Stein & Newman 2004.
  140. Scholz 2002 , p. 284, §5.3.3; Kagan & Jackson 1991 , pp. 21, 419; Jackson & Kagan 2006 , p. S404.
  141. Kagan & Jackson 1991 , pp. 21, 419; McCann et al. 1979; Rong, Jackson & Kagan 2003.
  142. Lomnitz & Nava 1983.
  143. Rong, Jackson & Kagan 2003 , p. 23.
  144. Kagan & Jackson 1991 , Summary.
  145. See details in Tiampo & Shcherbakov 2012 , §2.4.
  146. CEPEC 2004a.
  147. Kossobokov et al. 1999.
  148. Geller et al. 1997.
  149. Hough 2010b , pp. 142–149.
  150. Zechar 2008; Hough 2010b , p. 145.
  151. Zechar 2008 , p. 7. See also p. 26.
  152. Tiampo & Shcherbakov 2012 , §2.1. Hough 2010b , chapter 12, provides a good description.
  153. Hardebeck, Felzer & Michael 2008 , par. 6.
  154. Hough 2010b , pp. 154–155.
  155. Tiampo & Shcherbakov 2012 , p. 93, §2.1.
  156. Hardebeck, Felzer & Michael 2008 , §4 show how suitable selection of parameters shows "DMR": Decelerating Moment Release.
  157. Hardebeck, Felzer & Michael 2008 , par. 1, 73.
  158. Mignan 2011 , Abstract.
  159. Rouet-Leduc et al. 2017.
  160. Smart, Ashley (19 September 2019). "Artificial Intelligence Takes on Earthquake Prediction". Quanta Magazine. Retrieved 28 March 2020.
  161. DeVries et al. 2018.
  162. Mignan & Broccardo 2019.
  163. Tarasov & Tarasova 2009.
  164. Novikov et al. 2017.
  165. Zeigarnik et al. 2007.
  166. Geller 1997 , §4.
  167. E.g.: Davies 1975; Whitham et al. 1976 , p. 265; Hammond 1976; Ward 1978; Kerr 1979 , p. 543; Allen 1982 , p. S332; Rikitake 1982; Zoback 1983; Ludwin 2001; Jackson 2004 , pp. 335, 344; ICEF 2011 , p. 328.
  168. Whitham et al. (1976 , p. 266) provide a brief report. Raleigh et al. (1977) has a fuller account. Wang et al. (2006 , p. 779), after careful examination of the records, set the death toll at 2,041.
  169. Raleigh et al. 1977 , p. 266, quoted in Geller (1997 , p. 434). Geller has a whole section (§4.1) of discussion and many sources. See also Kanamori 2003 , pp. 1210–11.
  170. Quoted in Geller (1997 , p. 434). Lomnitz (1994 , Ch. 2) describes some of the circumstances attending the practice of seismology at that time; Turner 1993 , pp. 456–458 has additional observations.
  171. Jackson 2004, p. 345.
  172. Kanamori 2003 , p. 1211.
  173. Wang et al. 2006 , p. 785.
  174. Roberts 1983 , p. 151, §4.
  175. Hough 2010 , p. 114.
  176. Gersony 1982 , p. 231.
  177. Gersony 1982 , p. 247, document 85.
  178. Gersony 1982 , p. 248, document 86; Roberts 1983 , p. 151.
  179. Gersony 1982, p. 201, document 146.
  180. Gersony 1982 , p. 343, document 116; Roberts 1983 , p. 152.
  181. John Filson, deputy chief of the USGS Office of Earthquake Studies, quoted by Hough (2010 , p. 116).
  182. Gersony 1982 , p. 422, document 147, U.S. State Dept. cablegram.
  183. Hough 2010 , p. 117.
  184. Gersony 1982 , p. 416; Kerr 1981.
  185. Giesecke 1983 , p. 68.
  186. Geller (1997 , §6) describes some of the coverage.
  187. Bakun & McEvilly 1979; Bakun & Lindh 1985; Kerr 1984.
  188. Bakun et al. 1987.
  189. Kerr 1984, "How to Catch an Earthquake"; Roeloffs & Langbein 1994.
  190. Roeloffs & Langbein 1994 , p. 316.
  191. Quoted by Geller 1997 , p. 440.
  192. Kerr 2004; Bakun et al. 2005; Harris & Arrowsmith 2006 , p. S5.
  193. Hough 2010b , p. 52.
  194. Kagan 1997.
  195. Varotsos & Alexopoulos 1984b , p. 117, Table 3.
  196. Varotsos et al. 1996a , Table 1.
  197. Jackson & Kagan 1998.
  198. Varotsos et al. 1996a , p. 55, Table 3.
  199. Varotsos et al. 1996a, p. 49.
  200. Varotsos et al. 1996a , p. 56.
  201. Jackson 1996b , p. 1365; Mulargia & Gasperini 1996a , p. 1324.
  202. Geller 1997 , p. 436, §4.5: "VAN's 'predictions' never specify the windows, and never state an unambiguous expiration date. Thus VAN are not making earthquake predictions in the first place."
  203. Jackson 1996b , p. 1363. Also: Rhoades & Evison (1996 , p. 1373): No one "can confidently state, except in the most general terms, what the VAN hypothesis is, because the authors of it have nowhere presented a thorough formulation of it."
  204. Kagan & Jackson 1996 , p. 1434.
  205. Geller 1997 , p. 436, Table 1.
  206. Mulargia & Gasperini 1992 , p. 37.
  207. Hamada 1993: 10 successful predictions out of 12 issued (defining success as an earthquake occurring within 22 days of the prediction, within 100 km of the predicted epicenter, and with a magnitude difference (predicted minus true) not greater than 0.7).
  208. Shnirman, Schreider & Dmitrieva 1993; Nishizawa et al. 1993[ full citation needed ] and Uyeda 1991[ full citation needed ]
  209. Lighthill 1996.
  210. "Table of contents". Geophysical Research Letters. 23 (11). 27 May 1996. doi:10.1002/grl.v23.11.; Aceves, Park & Strauss 1996.
  211. Varotsos & Lazaridou 1996b; Varotsos, Eftaxias & Lazaridou 1996.
  212. Varotsos et al. 2013
  213. Christopoulos, Skordas & Sarlis 2020
  214. Donges et al. 2016
  215. Mulargia & Gasperini 1992 , p. 32; Geller 1996a , p. 184 ("ranges not given, or vague"); Mulargia & Gasperini 1992 , p. 32 ("large indetermination in the parameters"); Rhoades & Evison 1996 , p. 1372 ("falls short"); Jackson 1996b , p. 1364 ("have never been fully specified"); Jackson & Kagan 1998 , p. 573 ("much too vague"); Wyss & Allmann 1996 , p. 1307 ("parameters not defined"). Stavrakakis & Drakopoulos (1996) discuss some specific cases in detail.
  216. Geller 1997 , p. 436. Geller (1996a , pp. 183–189, §6) discusses this at length.
  217. Telegram 39, issued 1 September 1988, in Varotsos & Lazaridou 1991 , p. 337, Fig. 21. See figure 26 (p. 344) for a similar telegram. See also telegrams 32 and 41 (figures 15 and 16, pp. 115–116) in Varotsos & Alexopoulos 1984b. This same pair of predictions is apparently presented as Telegram 10 in Table 1, p. 50, of Varotsos et al. 1996a. Text from several telegrams, along with faxes of a similar character, is presented in Table 2 (p. 54).
  218. Varotsos et al. (1996a); they also cite Hamada's claim of a 99.8% confidence level. Geller (1996a , p. 214) finds that this "was based on the premise that 6 out of 12 telegrams" were in fact successful predictions, which is questioned. Kagan (1996 , p. 1315) finds that in Shnirman et al. "several variables ... have been modified to achieve the result." Geller et al. (1998 , p. 98) mention other "flaws such as overly generous crediting of successes, using strawman null hypotheses and failing to properly account for a posteriori 'tuning' of parameters."
  219. Kagan 1996 , p. 1318.
  220. GR Reporter (2011): "From its very appearance in the early 1990s until today, the VAN group is the subject of sharp criticism from Greek seismologists"; Chouliaras & Stavrakakis (1999): "panic overtook the general population" (Pyrgos, 1993). Ohshansky & Geller (2003 , p. 318): "causing widespread unrest and a sharp increase in tranquilizer drugs" (Athens, 1999). Papadopoulos (2010): "great social uneasiness" (Patras, 2008). Anagnostopoulos (1998 , p. 96): "often caused widespread rumors, confusion and anxiety in Greece". ICEF (2011 , p. 352): issuance over the years of "hundreds" of statements "causing considerable concern among the Greek population."
  221. Stiros 1997 , p. 482.
  222. Varotsos et al. 1996a , pp. 36, 60, 72.
  223. Anagnostopoulos 1998.
  224. Geller 1996a, p. 223.
  225. Apostolidis 2008; Uyeda & Kamogawa 2008; Chouliaras 2009; Uyeda 2010. [full citation needed]
  226. Papadopoulos 2010.
  227. Uyeda & Kamogawa 2010
  228. Harris 1998 , p. B18.
  229. Garwin 1989.
  230. USGS staff 1990 , p. 247.
  231. Kerr 1989; Harris 1998.
  232. e.g., ICEF 2011 , p. 327.
  233. Harris 1998 , p. B22.
  234. Harris 1998 , p. B5, Table 1.
  235. Harris 1998 , pp. B10–B11.
  236. Harris 1998 , p. B10, and figure 4, p. B12.
  237. Harris 1998 , p. B11, figure 5.
  238. Geller (1997 , §4.4) cites several authors to say "it seems unreasonable to cite the 1989 Loma Prieta earthquake as having fulfilled forecasts of a right-lateral strike-slip earthquake on the San Andreas Fault."
  239. Harris 1998 , pp. B21–B22.
  240. Hough 2010b , p. 143.
  241. AHWG 1990, p. 10 (Spence et al. 1993, p. 54 [62]).
  242. Spence et al. 1993 , footnote, p. 4 [12] "Browning preferred the term projection, which he defined as determining the time of a future event based on calculation. He considered 'prediction' to be akin to tea-leaf reading or other forms of psychic foretelling." See also Browning's own comment on p. 36 [44].
  243. Spence et al. 1993, p. 39 [47].
  244. Spence et al. 1993 , pp. 9–11 [17–19], and see various documents in Appendix A, including The Browning Newsletter for 21 November 1989 (p. 26 [34]).
  245. AHWG 1990 , p. III (Spence et al. 1993 , p. 47 [55]).
  246. AHWG 1990 , p. 30 (Spence et al. 1993 , p. 64 [72]).
  247. Spence et al. 1993, p. 13 [21]
  248. Spence et al. 1993, p. 29 [37].
  249. Spence et al. 1993 , throughout.
  250. Tierney 1993 , p. 11.
  251. Spence et al. 1993 , pp. 4 [12], 40 [48].
  252. CEPEC 2004a; Hough 2010b , pp. 145–146.
  253. CEPEC 2004b.
  254. ICEF 2011 , p. 320.
  255. Alexander 2010 , p. 326.
  256. Squires & Rayne 2009; McIntyre 2009.
  257. Hall 2011 , p. 267.
  258. Kerr 2009.
  259. Dollar 2010.
  260. ICEF (2011 , p. 323) alludes to predictions made on 17 February and 10 March.
  261. Kerr 2009; Hall 2011 , p. 267; Alexander 2010 , p. 330.
  262. Kerr 2009; Squires & Rayne 2009.
  263. Dollar 2010; Kerr 2009.
  264. ICEF 2011 , pp. 323, 335.
  265. Geller 1997 found "no obvious successes".
  266. Panel on Earthquake Prediction 1976 , p. 2.
  267. Kagan 1997b , p. 505: "The results of efforts to develop earthquake prediction methods over the last 30 years have been disappointing: after many monographs and conferences and thousands of papers we are no closer to a working forecast than we were in the 1960s".
  268. Main 1999.
  269. Geller et al. 1997 , p. 1617.
  270. Kanamori & Stewart 1978 , abstract.
  271. Sibson 1986.
  272. Cowan, Nicol & Tonkin 1996.
  273. Schwartz & Coppersmith (1984 , pp. 5696–7) argued that the characteristics of fault rupture on a given fault "can be considered essentially constant through several seismic cycles". The expectation of a regular rate of occurrence, all other factors accounted for, was disappointed by the lateness of the Parkfield earthquake.
  274. Ziv, Cochard & Schmittbuhl 2007.
  275. Geller et al. 1997 , p. 1616; Kagan 1997b , p. 517. See also Kagan 1997b , p. 520, Vidale 1996 and especially Geller 1997 , §9.1, "Chaos, SOC, and predictability".
  276. Matthews 1997.
  277. Martucci et al. 2021
  278. Varotsos, Sarlis & Skordas 2020
  279. E.g., Sykes, Shaw & Scholz 1999 and Evison 1999.
  280. ICEF 2011, p. 360.

Sources

Further reading