| Didier Sornette | |
|---|---|
| Born | June 25, 1957, Paris, France |
| Nationality | French |
| Alma mater | École Normale Supérieure (1977–1981); University of Nice (1980–1985) |
| Known for | Prediction of crises and extreme events in complex systems, physical modeling of earthquakes, physics of complex systems and pattern formation in spatio-temporal structures |
| Awards | Science et Défense French National Award; 2000 Research McDonnell award; Risques-Les Echos prize 2002 for Predictability of catastrophic events |
| Scientific career | |
| Fields | Physics, geophysics, complex systems, economics, finance |
| Institutions | Swiss Federal Institute of Technology Zurich (ETH Zurich), Swiss Finance Institute, UCLA, CNRS |
Didier Sornette (born June 25, 1957, in Paris) is a French researcher studying subjects including complex systems and risk management. He is Professor on the Chair of Entrepreneurial Risks at the Swiss Federal Institute of Technology Zurich (ETH Zurich) and is also a professor of the Swiss Finance Institute. He was previously Professor of Geophysics at the University of California, Los Angeles (UCLA) (1996–2006) and a Research Professor at the French National Centre for Scientific Research (CNRS, 1981–2006).
With his long-time collaborator Dr. Guy Ouillon, Sornette has led a research group on the "Physics of earthquakes" for over 25 years. The group is active in the modelling of earthquakes, landslides and other natural hazards, combining concepts and tools from statistical physics, statistics, tectonics, seismology and more. First located at the Laboratory of Condensed Matter Physics (University of Nice, France), then at the Earth and Space Sciences Department (UCLA, USA), the group has been at ETH Zurich (Switzerland) since March 2006.
The group has tackled the problem of earthquake and rupture prediction since the mid-1990s within the broader physical concept of critical phenomena. [1] Treating rupture as a second-order phase transition, this approach predicts that, as rupture approaches, the spatial correlation length of stress and damage increases. [2] This in turn leads to a power-law acceleration of moment and strain release, up to the macroscopic failure time of the sample (i.e. a large earthquake in nature). This prediction has been checked on various natural and industrial/laboratory data, over a wide spectrum of scales (laboratory samples, mines, the California earthquake catalog) and under different loading conditions of the system (constant stress rate, constant strain rate). The most puzzling observation is that the critical power-law acceleration of the rate is decorated by log-periodic oscillations, suggesting a universal scaling ratio close to 2.2. The existence of such oscillations stems from interactions between seismogenic structures (see below for the case of faults and fractures), but also offers better constraints for identifying areas within which a large event may occur. The concept of critical piezo-electricity in polycrystals [3] [4] [5] has been applied to the Earth's crust. [6]
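Schematically, the log-periodic power-law acceleration described above can be written as follows (an illustrative form; the constants A, B, C, m, ω, φ are fitted to each dataset, and t_c denotes the failure time):

```latex
% Illustrative time-to-failure law for a cumulative quantity E(t) such as the Benioff
% strain or moment release: a power-law singularity at t_c decorated by log-periodic
% oscillations whose preferred scaling ratio is lambda = exp(2*pi/omega) ~ 2.2.
E(t) \approx A + B\,(t_c - t)^{m}
      \Big[\, 1 + C \cos\!\big(\omega \ln(t_c - t) + \phi\big) \Big],
\qquad \lambda = e^{2\pi/\omega} \approx 2.2 .
```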
Earthquake forecasting differs from prediction in the sense that no alarm is issued; instead, a time-dependent probability of earthquake occurrence is estimated. Sornette's group has contributed significantly to the theoretical development and study of the properties of the now standard Epidemic Type Aftershock Sequence (ETAS) model. [7] In a nutshell, this model states that each event triggers its own direct aftershocks, which themselves trigger their own aftershocks, and so on. The consequence is that events can no longer be labeled as foreshocks, mainshocks or aftershocks, as they can be all of these at the same time (with different levels of probability). In this model, the probability for an event to trigger another one depends primarily on their separation in space and time, as well as on the magnitude of the triggering event, so that seismicity is governed by a set of seven parameters. Sornette's group is currently pushing the model to its limits by allowing space and time variations of its parameters. [8] Although this new model reaches better forecasting scores than any other competing model, it is not sufficient to achieve systematic, reliable predictions. The main reason is that it predicts future seismicity rates quite accurately but fails to constrain the magnitudes (which are assumed to be distributed according to the Gutenberg-Richter law and to be independent of each other). Other seismic or non-seismic precursors are thus required to further improve those forecasts. According to the ETAS model, the rate of triggered activity around a given event is isotropic. This over-simplified assumption has recently been relaxed by coupling the statistics of ETAS to genuine mechanical information: the stress perturbation due to a given event is modelled on its surroundings and correlated with the space-time rate of subsequent activity as a function of the amplitude and sign of the transferred stress. This suggests that the triggering of aftershocks stems from a combination of dynamic (seismic waves) and elasto-static processes. Another unambiguous result of this work is that the Earth's crust in Southern California has a rather short memory of past stress fluctuations, lasting only about 3 to 4 months. [9] This may put more constraints on the time window within which one may look for both seismic and non-seismic precursors.
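For reference, a commonly used form of the ETAS conditional intensity (as found in the literature; exact parameterizations vary across papers) is:

```latex
% ETAS conditional seismicity rate at time t and location x, given the history H_t of
% past events (t_i, x_i, m_i): a background rate plus Omori-type triggering kernels
% whose productivity grows exponentially with the magnitude of the triggering event.
\lambda(t, x \mid \mathcal{H}_t)
  = \mu(x) + \sum_{i:\; t_i < t}
      K\, e^{\alpha (m_i - m_0)}\,
      \frac{1}{(t - t_i + c)^{p}}\,
      f(x - x_i; m_i) ,
```

where μ is the background rate, K, α, c and p are triggering parameters, m_0 is the magnitude cutoff and f is a spatial kernel.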
Ouillon and Sornette have developed a purely statistical physics model of earthquake interaction and triggering, aiming to give more flesh to the purely empirical, linear ETAS model. The basic assumption of this "multifractal stress activated" (MSA) model [10] [11] is that, at any place and time, the local failure rate depends exponentially on the applied stress. The second key ingredient is to recognize that, in the Earth's crust, the local stress field is the sum of the large-scale, far-field stress due to plate motion and of all the stress fluctuations due to past earthquakes. As elastic stresses add up, the exponentiation makes this model nonlinear. Solving it analytically allowed them to predict that each event triggers aftershocks with a rate decaying in time according to the Omori law, i.e. as 1/t^p, but with a special twist that had not been recognized before: the unique prediction of the MSA model is that the exponent p is not constant (close to 1) but increases linearly with the magnitude of the mainshock. Statistical analyses of various catalogs (California, Japan, Taiwan, Harvard CMT) have been carried out to test this prediction, and they confirmed it using different statistical techniques (stacks to improve the signal-to-noise ratio, wavelets specifically devised for multiscale analysis, extreme magnitude distributions, etc.). [12] [13] This result shows that small events may trigger fewer aftershocks than large ones, but that their cumulative effect may be more long-lasting in the Earth's crust. A new technique, called the barycentric fixed-mass method, has also recently been introduced to improve considerably the estimation of the multifractal structure of spatio-temporal seismicity expected from the MSA model. [14]
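Schematically (illustrative notation, not the authors' exact formulation), the MSA prediction for the rate of aftershocks triggered by a mainshock of magnitude M reads:

```latex
% Omori-type decay with a magnitude-dependent exponent, the signature prediction of
% the MSA model: p grows linearly with the mainshock magnitude M.
r_M(t) \;\propto\; \frac{1}{t^{\,p(M)}}, \qquad p(M) = a + b\,M, \quad b > 0 ,
```

so that sequences triggered by large mainshocks decay faster, while those triggered by small events, although weaker, decay more slowly and therefore persist longer.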
A significant part of the activity of Sornette's group has also been devoted to the statistical physics modelling of fractures and faults at different scales, and to their statistical properties. These features are important as they may control various transport properties of the crust and represent the loci of earthquake nucleation.
Sornette and Sornette (1989) [15] suggested viewing earthquakes and global plate tectonics as self-organized critical phenomena. As fault networks are clearly self-organized critical systems, in the sense that earthquakes occur on faults and faults grow because of earthquakes, [16] [17] [18] resulting in hierarchical properties, the study of their statistics should also bring information about the seismic process itself. [19] Davy, Sornette and Sornette [20] [21] [16] [22] introduced a model of the growth and pattern formation of faulting and showed that the existence of un-faulted areas is a natural consequence of the fractal organization of faulting. Cowie et al. (1993; 1995) [23] [24] developed the first theoretical model that encompasses both the long-range and long-time organization of complex fractal fault patterns and the short-time dynamics of earthquake sequences. A result is the generic existence in the model of fault competition, with intermittent activity of different faults. The geometrical and dynamical complexity of faults and earthquakes is shown to result from the interplay between spatio-temporal chaos and an initially featureless quenched heterogeneity. Miltenberger et al. [25] and Sornette et al. (1994) [26] showed that self-organized criticality in earthquakes and tectonic deformation is related to the synchronization of threshold relaxation oscillators. Lee et al. (1999) [27] demonstrated the intrinsic intermittent nature of seismic activity on faults, which results from their competition to accommodate the tectonic deformation. Sornette and Pisarenko (2003) performed a rigorous statistical analysis of the distribution of plate sizes participating in plate tectonics and demonstrated the fractal nature of plate tectonics. [28]
Using a collection of maps centered at the same location in Saudi Arabia but at different scales (from meters to hundreds of kilometers, i.e. slightly more than five decades), it was shown that joint and fault patterns display distinct spatial scaling properties within distinct ranges of scales. [29] [30] [31] These transition scales (which quantify the horizontal distribution of brittle structures) can be nicely correlated with the vertical mechanical layering of the host medium (the Earth's crust). In particular, fracture patterns can be shown to be rather uniform at scales smaller than the thickness of the sedimentary basin, and to become heterogeneous and multifractal at larger scales. These different regimes were discovered by designing new multifractal analysis techniques (able to take into account the small size of the datasets as well as irregular geometrical boundary conditions), and by introducing a new technique based on 2D anisotropic wavelet analysis. By mapping some joints within the crystalline basement in the same area, it was found that their spatial organization (spacing distribution) displayed discrete scale invariance over more than four decades. [32] Using another dataset and a theoretical model, Huang et al. also showed that, due to interactions between parallel structures, the length distribution of joints displays discrete scale invariance. [33]
Motivated by earthquake prediction and forecasting, Sornette's group has also contributed to the problem of 3D fault mapping. Given an earthquake catalog with a large number of events, the main idea is to invert for the set of planar segments that best fits this dataset. [34] [35] More recently, Ouillon and Sornette developed techniques that model the spatial distribution of events using a mixture of anisotropic Gaussian kernels. [36] These approaches allow one to identify a large number of faults that are not mapped by more traditional geological techniques because they leave no signature at the surface. The reconstructed 3D fault networks correlate well with focal mechanisms, and also provide a significant gain when used as proxies of earthquake locations in forecasting experiments. As catalogs can be very large (up to half a million events for Southern California), the catalog condensation technique has been introduced, which allows one to detect probable repeating events and get rid of this redundancy. [37]
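A minimal sketch of the kernel-mixture idea, using an off-the-shelf Gaussian mixture with full (anisotropic) covariances on synthetic hypocenters rather than the authors' dedicated algorithm:

```python
# Minimal sketch (not the authors' exact algorithm): approximate a catalog of
# hypocenters by a mixture of anisotropic 3D Gaussian kernels, whose flattest
# principal planes can serve as candidate fault-plane orientations.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "catalog": two planar clouds of events (x, y, depth), in km.
fault1 = rng.normal(0, [5.0, 0.2, 2.0], size=(500, 3)) + [0, 0, 8]
fault2 = rng.normal(0, [0.3, 4.0, 1.5], size=(400, 3)) + [6, 3, 12]
catalog = np.vstack([fault1, fault2])

# Fit a mixture of fully anisotropic Gaussians (full covariance matrices).
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(catalog)

for k, (mu, cov) in enumerate(zip(gmm.means_, gmm.covariances_)):
    # The eigenvector with the smallest eigenvalue is normal to the best-fit plane.
    eigval, eigvec = np.linalg.eigh(cov)
    normal = eigvec[:, 0]
    print(f"segment {k}: centre={mu.round(1)}, plane normal={normal.round(2)}")
```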
In 2016, in collaboration with Prof. Friedemann Freund (with John Scoville) at NASA Ames and GeoCosmo, Sornette (with Guy Ouillon) launched the Global Earthquake Forecasting Project (GEFS) to advance the field of earthquake prediction. This project is rooted in the rigorous theoretical and experimental solid-state physics of Prof. Friedemann Freund, [38] [39] whose theory can explain the whole spectrum of electromagnetic-type phenomena that have been reported before large earthquakes for decades, if not centuries: when rocks are subjected to significant stresses, electrons and positive holes are activated; the latter flow to less stressed domains of the material, generating large-scale electric currents. These in turn induce local geoelectric and geomagnetic anomalies, stimulated infrared emission, air ionization, and increased levels of ozone and carbon monoxide. All these fluctuations are currently measured using ground stations or remote-sensing technologies. There are innumerable reports of heterogeneous types of precursory phenomena, ranging from the emission of electromagnetic waves from ultralow frequency (ULF) to visible (VIS) and near-infrared (NIR) light, electric-field and magnetic-field anomalies of various kinds (see below), all the way to unusual animal behavior.
Space and ground anomalies preceding and/or contemporaneous to earthquakes include:

Satellite component:
1. Thermal Infrared (TIR) anomalies
2. Total Electron Content (TEC) anomalies
3. Ionospheric tomography
4. Ionospheric electric field turbulences
5. Atmospheric Gravity Waves (AGW)
6. CO release from the ground
7. Ozone formation at ground level
8. VLF detection of air ionization
9. Mesospheric lightning
10. Lineaments in the VIS-NIR

Ground station component:
1. Magnetic field variations
2. ULF emission from within the Earth's crust
3. Tree potentials and ground potentials
4. Soil conductivity changes
5. Groundwater chemistry changes
6. Trace gas release from the ground
7. Radon emanation from the ground
8. Air ionization at the ground surface
9. Sub-ionospheric VLF/ELF propagation
10. Nightglow
These precursory signals are intermittent and do not seem to occur systematically before every major earthquake. Researchers have attempted to explain and exploit some of them, but so far not satisfactorily and never jointly. Unfortunately, there is no worldwide repository for such data, and the existing databases are most often under-exploited, analysed too simplistically, or studied while neglecting the cross-correlations among them (most often because such data are acquired and owned by distinct and competing institutions). The GEFS stands as a revolutionary initiative with the following goals: (i) initiate collaborations with many data centers across the world to unify competences; (ii) propose a collaborative platform (InnovWiki, developed at ETH Zürich) to develop a mega-repository of data and tools of analysis; (iii) develop and rigorously test real-time, high-dimensional multivariate algorithms to predict earthquakes (location, time and magnitude) using all available data.
In 2004, Sornette used Amazon.com sales data to create a mathematical model for predicting bestseller potential based on very early sales results. [40] [41] [42] This was further developed to characterise the dynamics of the success of YouTube videos. [43] This work provides a general framework for analysing the precursory and aftershock properties of shocks and ruptures in finance, material rupture, earthquakes and Amazon.com sales: it has documented ubiquitous power laws, similar to the Omori law in seismology, that allow one to distinguish between external shocks and endogenous self-organization. [44]
With collaborators, Sornette has contributed extensively to the application and generalisation of the logistic function (and equation). Applications include tests of chaos in the discrete logistic map, [45] [46] an endo-exo approach to the classification of diseases, [47] [48] the introduction of a delayed feedback of the population on the carrying capacity to capture punctuated evolution, [49] [50] symbiosis, [51] [52] [53] deterministic dynamical models of regime switching between conventions and business cycles in economic systems, [54] [55] the modelling of periodically collapsing bubbles, [56] and interactions between several species via the mutual dependence of their carrying capacities. [57]
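A minimal sketch of the general structure behind such generalisations, with illustrative notation: the standard logistic equation, with the carrying capacity made a function of the population at an earlier time t-τ to produce delayed feedback (the specific functional form of K differs across the cited papers):

```latex
% Logistic growth with delayed feedback of the population on its carrying capacity.
\frac{dN(t)}{dt} = r\, N(t)\left[ 1 - \frac{N(t)}{K\big(N(t-\tau)\big)} \right],
```

which reduces to the classical logistic equation when K is constant.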
Another application is a methodology to determine the fundamental value of firms in the social-networking sector, such as Facebook, Groupon, LinkedIn Corp., Pandora Media Inc., Twitter and Zynga, and more recently the question of what justifies the skyrocketing valuations of unicorn (finance) companies. The key idea proposed by Cauwels and Sornette [58] is that the revenues and profits of a social-networking firm are inherently linked to its user base through a direct channel that has no equivalent in other sectors; the growth of the number of users can be calibrated with standard logistic growth models and allows for reliable extrapolations of the size of the business at long time horizons. With their PhD student, they applied this methodology to the valuation of Zynga before its IPO and demonstrated its value by presenting ex-ante forecasts leading to a successful trading strategy. [59] A recent application to the boom of so-called "unicorns", the name given to start-ups valued at over $1 billion, such as Spotify and Snapchat, can be found in this master's thesis. [60]
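An illustrative sketch of the calibration step (hypothetical user numbers, not the Cauwels-Sornette dataset): fit a standard logistic curve to a firm's user counts and extrapolate the plateau that anchors the valuation.

```python
# Illustrative sketch only (hypothetical numbers): calibrate a standard logistic
# growth curve on a firm's user counts and extrapolate the asymptotic user base.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: K = carrying capacity, r = growth rate, t0 = inflection time."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical quarterly active users (millions) over 12 quarters.
t = np.arange(12)
users = np.array([5, 8, 13, 21, 33, 50, 70, 92, 110, 124, 133, 139], dtype=float)

(K, r, t0), _ = curve_fit(logistic, t, users, p0=[200, 0.5, 6])
print(f"estimated plateau ~{K:.0f}M users, growth rate {r:.2f}/quarter")
print("extrapolated users in 8 more quarters:", round(logistic(19, K, r, t0), 1), "M")
```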
He has contributed theoretical models, empirical tests and operational implementations of the detection and forecasting of financial bubbles. [61] [62] [63] [64]
By combining (i) the economic theory of rational expectation bubbles, (ii) behavioral finance on the imitation and herding of investors and traders, and (iii) the mathematical and statistical physics of bifurcations and phase transitions, he has pioneered the log-periodic power law singularity (LPPLS) model of financial bubbles. The LPPLS model considers the faster-than-exponential (power law with finite-time singularity) increase in asset prices, decorated by accelerating oscillations, as the main diagnostic of bubbles. [65] It embodies the effect of positive feedback loops of higher return anticipations competing with negative feedback spirals of crash expectations. The LPPLS model was first proposed in 1995 to predict the failure of critical pressure tanks carried on board the European Ariane rocket [66] and as a theoretical formulation of the accelerating moment release used to predict earthquakes. [67] The LPPLS model was then proposed to also apply to financial bubbles and their bursts by Sornette, Johansen and Bouchaud [68] and independently by Feigenbaum and Freund. [69] The formal analogy between mechanical ruptures, earthquakes and financial crashes was further refined within the rational expectation bubble framework of Blanchard and Watson [70] by Johansen, Ledoit and Sornette. [71] [72] This approach is now referred to in the literature as the JLS model. Recently, Sornette added the S to the LPPL acronym of "log-periodic power law" to make clear that the "power law" part should not be confused with power-law distributions: indeed, the "power law" refers to the hyperbolic singularity of the form $\ln[p(t)] = A + B\,(t_c - t)^{m}$, where $\ln[p(t)]$ is the logarithm of the price at time $t$, $0 < m < 1$, and $t_c$ is the critical time of the end of the bubble.
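The full LPPLS calibration formula commonly reported in this literature (notation varies slightly across papers) augments the power law with accelerating log-periodic oscillations:

```latex
% Expected log-price of an asset during a bubble, according to the LPPLS specification:
% a power-law singularity at the critical time t_c decorated by log-periodic oscillations.
\mathbb{E}\big[\ln p(t)\big]
  = A + B\,(t_c - t)^{m}
      + C\,(t_c - t)^{m}\cos\!\big(\omega \ln(t_c - t) - \phi\big),
\qquad B < 0,\; 0 < m < 1 ,
```

where ω sets the log-periodic angular frequency and t_c is the most probable time of the end of the bubble.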
In August 2008, in reaction to the then pervasive claim that the financial crisis could not have been foreseen, a view that he has combatted vigorously, [73] he set up the Financial Crisis Observatory. [74] The Financial Crisis Observatory (FCO) is a scientific platform aimed at testing and quantifying rigorously, in a systematic way and on a large scale, the hypothesis that financial markets exhibit a degree of inefficiency and a potential for predictability, especially during regimes when bubbles develop. The FCO evolved from ex-post analyses of many historical bubbles and crashes to continuing ex-ante predictions of the risks of bubbles before their actual occurrence (including the US real estate bubble ending in mid-2006, [75] the oil bubble bursting in July 2008, [76] and the Chinese stock market bubbles [77] [78]).
The FCO also launched a design (called the "financial bubble experiments") of ex-ante reports of bubbles, in which the digital authentication key of a document containing the forecasts was published on the internet. The content of the document was only published after the event had passed, to avoid any possible impact of the publication of the ex-ante prediction on the final outcome. Additionally, there was full transparency through one single communication channel. [79] [80] [81]
Since October 2014, he has published each month with his team a Global Bubble Status Report, the FCO Cockpit, which discusses the historical evolution of bubbles in and between different asset classes and geographies. It is the result of an extensive analysis performed on the historical time series of approximately 430 systemic assets and 835 single stocks worldwide. The systemic assets are bond, equity and commodity indices and a selection of currency pairs. The single stocks are mainly US and European equities. The monthly FCO Cockpit reports are usually divided into two parts: the first part presents the state of the world, based on the analysis of the systemic assets, including stock and bond indices, currencies and commodities; the second part zooms in on the bubble behavior of single stocks by calculating the bubble warning indicators as well as two financial strength indicators, which indicate the fundamental value of the stock and its growth capability, respectively. The stocks are the constituents of the Stoxx Europe 600, S&P 500 and Nasdaq 100 indices. These indicators provide a stock classification into four quadrants:
1. Quadrant 1: stocks with a strong positive bubble score and a strong value score;
2. Quadrant 2: stocks with a strong positive bubble score and a weak value score;
3. Quadrant 3: stocks with a strong negative bubble score and a weak value score;
4. Quadrant 4: stocks with a strong negative bubble score and a strong financial strength score.
These four quadrants are used to construct four benchmark portfolios each month, which are followed to test their performance. The goal is to establish a long track record to continue testing the FCO hypotheses.
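A toy illustration of the quadrant classification (hypothetical scores and thresholds, not the FCO methodology itself), as described in the list above:

```python
# Toy illustration with hypothetical scores and an ad hoc "strong" threshold:
# assign stocks to the four FCO Cockpit quadrants from a bubble score and a value score.
def quadrant(bubble_score, value_score, strong=0.5):
    """Return the FCO Cockpit quadrant (1-4), or None if the bubble signal is weak."""
    if bubble_score >= strong:            # strong positive bubble signal
        return 1 if value_score >= strong else 2
    if bubble_score <= -strong:           # strong negative bubble signal
        return 3 if value_score < strong else 4
    return None                           # no strong bubble signal either way

stocks = {"AAA": (0.8, 0.9), "BBB": (0.7, 0.1), "CCC": (-0.6, 0.2), "DDD": (-0.7, 0.8)}
for name, (b, v) in stocks.items():
    print(name, "-> quadrant", quadrant(b, v))
```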
Inspired by the research of Ernst Fehr and his collaborators, Darcet and Sornette proposed that the paradox of human cooperation and altruism (in the absence of kinship, direct or indirect reciprocity) emerges naturally from an evolutionary feedback selection mechanism. [82] The corresponding generalised cost-benefit accounting equation has been tested and supported by simulations of an agent-based model mimicking the evolutionary selection pressure of our ancestors: [83] [84] starting with a population of agents with no propensity for cooperation and altruistic punishment, simple rules of selection by survival in interacting groups lead to the emergence of a level of cooperation and altruistic punishment in agreement with experimental findings. [85]
Stimulated by Roy Baumeister's book Is There Anything Good About Men?: How Cultures Flourish by Exploiting Men (Oxford University Press, 2010), Sornette, with his PhD student M. Favre, developed a very simple agent-based model linking together quantitatively several unlikely pieces of data, such as differences between men and women, the time to our most recent common ancestors, and gender differences in the proportions of ancestors of the present human population. The question of whether men and women are innately different has occupied the attention of psychologists for over a century. Most researchers assume that evolution contributed to shaping any innate differences, presumably by means of reproductive success. Therefore, insofar as the reproductive contingencies were different for men and women, the psychological consequences and adaptations stemming from natural selection would differ by gender. For that reason, new information about gender differences in reproductive success in our biological past is valuable. Favre and Sornette showed that the highly asymmetric investment cost for reproduction between males and females, the special role of females as sole child bearers, together with a high heterogeneity of males' fitnesses driven by females' selection pressure, is sufficient to explain quantitatively the fact that the present human population of Earth is descended from more females than males, at a ratio of about 2:1, [86] with however a broad distribution of possible values (the ratio 2:1 being the median in the ensemble of populations simulated by Favre and Sornette).
To describe the inherent sociability of Homo sapiens, the UCLA professor of anthropology Alan Fiske has theorised that all human interactions can be decomposed into just four "relational models", or elementary forms of human relations: communal sharing, authority ranking, equality matching and market pricing (to these are added the limiting cases of asocial and null interactions, in which people do not coordinate with reference to any shared principle). [87] With M. Favre, Sornette introduced the simplest model of dyadic social interactions and established its correspondence with Fiske's relational models theory (RMT). [88] Their model is rooted in the observation that each individual in a dyadic interaction can do either the same thing as the other individual, a different thing, or nothing at all. The relationships generated by this representation aggregate into six exhaustive and disjoint categories: four of them match the four relational models, while the remaining two correspond to the asocial and null interactions defined in RMT. The model can be generalised to the presence of N social actions. This mapping allows one to infer that the four relational models form an exhaustive set of all possible dyadic relationships based on social coordination, thus explaining why there could exist just four relational models.
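A toy enumeration of the underlying combinatorics (illustrative only, not the authors' formal construction): with one available social action, unordered pairings of the three elementary moves give exactly six dyadic categories.

```python
# Toy combinatorics: each member of a dyad can mirror the other ("same"), act
# differently ("different"), or abstain ("nothing"). Unordered pairings of these
# three options yield exactly six dyadic categories, matching the count in the text.
from itertools import combinations_with_replacement

moves = ["same", "different", "nothing"]
categories = list(combinations_with_replacement(moves, 2))
print(len(categories))   # 6
for c in categories:
    print(c)
```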
He has developed the dragon king theory of extreme events. [89] [90] The term "dragon-king" (DK) embodies a double metaphor implying that an event is both extremely large (a "king" [91]) and born of unique origins (a "dragon") relative to its peers. The hypothesis advanced in this work [92] is that DK events are generated by distinct mechanisms that intermittently amplify extreme events, leading to the generation of runaway disasters as well as extraordinary opportunities on the upside. He formulated the hypothesis that DKs could be detected in advance by the observation of associated precursory signs. [93] [94]
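A minimal sketch of one common dragon-king diagnostic discussed in this literature (synthetic data, ad hoc threshold): fit a power law to the bulk of a rank-size distribution and flag the few largest events that lie far above the extrapolated fit.

```python
# Illustrative dragon-king screening (simplified; synthetic data, ad hoc threshold):
# fit a power law to the bulk of the rank-size plot in log-log space and flag the
# largest events that exceed the extrapolated fit by a wide margin.
import numpy as np

rng = np.random.default_rng(42)
# Synthetic event sizes: a Pareto-distributed bulk plus two artificial "dragon-kings".
sizes = np.sort(np.concatenate([(1 - rng.random(1000)) ** (-1 / 1.5),  # Pareto tail
                                [400.0, 650.0]]))[::-1]                # injected outliers
ranks = np.arange(1, len(sizes) + 1)

# Fit log(size) vs log(rank) on the bulk (ranks 10..500), excluding the extreme tail.
bulk = slice(10, 500)
slope, intercept = np.polyfit(np.log(ranks[bulk]), np.log(sizes[bulk]), 1)
predicted = np.exp(intercept + slope * np.log(ranks))

# Flag events more than 3x above the power-law extrapolation as dragon-king candidates.
candidates = ranks[sizes > 3.0 * predicted]
print("dragon-king candidate ranks:", candidates)
```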
Together with Monika Gisler, he introduced the social bubble hypothesis [95] in a form that can be scrutinized methodically: [96] [97] [98] [99] strong social interactions between enthusiastic supporters of an idea/concept/project weave a network based on positive feedbacks, leading to widespread endorsement and extraordinary commitment by those involved in the respective project, beyond what would be rationalized by a standard cost-benefit analysis. [100] The social bubble hypothesis does not impose any value judgment, notwithstanding the use of the term "bubble", which is often associated with a negative outcome. Rather, it identifies the types of dynamics that shape scientific or technological endeavors. In other words, according to the social bubble hypothesis, major projects in general proceed via a social bubble mechanism, and it is claimed that most disruptive innovations go through such social bubble dynamics.
The social bubble hypothesis is related to Schumpeter’s famous creative destruction and to the “technological economic paradigm shift” of the social economist Carlota Perez [101] [102] who studies bubbles as antecedents of “techno-economic paradigm shifts.” Drawing from his professional experience as a venture capitalist, William H. Janeway similarly stresses the positive role of asset bubbles in financing technological innovations. [103]
With his Russian colleague V.I. Yukalov, he has introduced "quantum decision theory", [104] with the goal of establishing a holistic theoretical framework of decision making. Based on the mathematics of Hilbert spaces, it embraces uncertainty and employs non-additive probabilities for the resolution of complex choice situations with interference effects. The use of Hilbert spaces constitutes the simplest generalisation of the probability theory axiomatised by Kolmogorov [105] from real-valued probabilities to probabilities built on complex-valued amplitudes. By its mathematical structure, quantum decision theory aims at encompassing the superposition processes occurring down to the neuronal level. Numerous behavioral patterns, including those causing paradoxes within other theoretical approaches, are coherently explained by quantum decision theory. [104]
The version of Quantum Decision Theory (QDT) developed by Yukalov and Sornette principally differs from other approaches in two respects. First, QDT is based on a self-consistent mathematical foundation that is common to both quantum measurement theory and quantum decision theory. Starting from the von Neumann (1955) theory of quantum measurements, [106] Yukalov and Sornette have generalized it to the case of uncertain or inconclusive events, making it possible to characterize uncertain measurements and uncertain prospects. Second, the main formulas of QDT are derived from general principles, giving the possibility of general quantitative predictions.
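Schematically, as presented in the QDT literature (a compressed summary; details and notation follow Yukalov and Sornette's papers), the probability of choosing a prospect π_n is decomposed into a utility factor and an attraction factor:

```latex
% Prospect probability in QDT: a classical, utility-based term f plus an interference
% ("attraction") term q that sums to zero over the set of available prospects.
p(\pi_n) = f(\pi_n) + q(\pi_n),
\qquad \sum_n p(\pi_n) = 1, \quad \sum_n q(\pi_n) = 0 ,
```

where f plays the role of a classical choice probability and q encodes the interference effects responsible for deviations from classical decision theory.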
With Wei-Xing Zhou, he has introduced the "thermal optimal path" (TOP) method to quantify the dynamical evolution of lead-lag structures between two time series. The method consists of constructing a distance matrix based on the matching of all sample data pairs between the two time series, as in recurrence plots. The lag-lead structure is then searched for as the optimal path in the distance-matrix landscape that minimizes the total mismatch between the two time series while obeying a one-to-one causal matching condition. The problem is solved mathematically by transfer-matrix techniques, mapping the TOP method onto the problem of random directed polymers interacting with random substrates. Applications include the study of the relationships between inflation, inflation change, GDP growth rate and unemployment rate, [107] [108] the volatilities of the US inflation rate versus economic growth rates, [109] the US stock market versus the Federal funds rate and Treasury bond yields, [110] and UK and US real estate versus monetary policies. [111]
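A simplified sketch of a TOP-style computation (forward partition function only, whereas the published method combines forward and backward weights; parameters are illustrative):

```python
# Minimal sketch of a TOP-style calculation: build a mismatch matrix between two
# series and compute thermally averaged lags with a transfer-matrix recursion.
import numpy as np

def thermal_average_lag(x, y, T=1.0):
    """Thermally averaged lag (j - i) between series x and y along anti-diagonals."""
    n, m = len(x), len(y)
    eps = np.abs(x[:, None] - y[None, :])        # local mismatch (distance) matrix
    G = np.zeros((n, m))                         # transfer-matrix (partition) weights
    for i in range(n):
        for j in range(m):
            prev = 1.0 if (i == 0 and j == 0) else 0.0
            if i > 0:
                prev += G[i - 1, j]
            if j > 0:
                prev += G[i, j - 1]
            if i > 0 and j > 0:
                prev += G[i - 1, j - 1]
            G[i, j] = prev * np.exp(-eps[i, j] / T)
    lags = []
    for s in range(n + m - 1):                   # average lag on each anti-diagonal i + j = s
        cells = [(i, s - i) for i in range(max(0, s - m + 1), min(n, s + 1))]
        w = np.array([G[i, j] for i, j in cells])
        d = np.array([j - i for i, j in cells])
        lags.append(float(np.dot(w, d) / w.sum()))
    return np.array(lags)

# Synthetic example: y is x delayed by 5 samples (plus noise), so the recovered
# lag should hover around +5 in the bulk of the series.
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=150))              # a random walk: matching is unambiguous
y = np.concatenate([np.full(5, x[0]), x[:-5]]) + 0.1 * rng.normal(size=150)
print(np.round(thermal_average_lag(x, y, T=1.0)[140:160], 1))
```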
A recent improvement of TOP, called TOPS (symmetric thermal optimal path), has been introduced, [111] which complements TOP by imposing that the lead-lag relationship should be invariant with respect to a time reversal of the time series after a change of sign. This means that, if 'X comes before Y', this transforms into 'Y comes before X' under a time reversal. The TOPS approach stresses the importance of accounting for changes of regime, so that similar pieces of information or policies may have drastically different impacts and developments, conditional on the economic, financial and geopolitical conditions.
In 2015, in reaction to the extraordinary pressure on the Swiss franc and the general debate that a strong Swiss franc is a problem for Switzerland, he introduced the contrarian proposition that a strong Swiss franc is an extraordinary opportunity for Switzerland. He argues that the strong Swiss franc is the emergence (in the sense of complex adaptive systems) of the aggregate qualities of Switzerland: its political system, its infrastructure, its work organisation and ethics, its culture and much more. He proposes to "mine" Swiss francs to stabilise the exchange rate against the euro at an economically and politically consensual level (which could be around 1.20–1.25 CHF per euro) and to buy as many euros and dollars as necessary for this. The proceeds would be reinvested in a Swiss sovereign fund, which could reach a size of one trillion euros or more, following the strategies used by the Norwegian sovereign fund, the Singaporean sovereign funds and university endowment funds such as those of Harvard or Stanford. A full English version and a presentation are available online. A summary of the arguments has been presented in the German-speaking media. [112]