Shelia Guberman |
---|---
Born | 25 February 1930, Ukraine, USSR
Citizenship | USSR, United States
Fields | Nuclear physics, computer science, geology, geophysics, artificial intelligence, psychology of perception
Shelia Guberman (born 25 February 1930, Ukraine, USSR) is a scientist working in computer science, nuclear physics, geology, geophysics, medicine, artificial intelligence and perception. He proposed the D-waves theory of Earth seismicity, [1] algorithms of Gestalt perception (1980) and image segmentation, and programs for oil and gas field exploration technology (1985).
He is the son of Aizik Guberman (writer and poet) and his wife Etya (teacher). From 1947 to 1952 Guberman studied at the Institute of Electrical Communications in Odessa, USSR, graduating in radio engineering. From 1952 to 1958 he worked as a field geophysicist in the Soviet oil industry. From 1958 to 1961 he was a postgraduate student at the Oil and Gas Institute in Moscow. In 1962 he received a PhD in nuclear physics, followed by a PhD in applied mathematics in 1971. In 1971 he was appointed to a full professorship in computer science. After authoring the first applied pattern recognition program in 1962, Guberman specialized in artificial intelligence, implementing principles of Gestalt perception in computer programs for geological data analysis. In 1966 he was invited by the outstanding 20th-century mathematician Israel Gelfand to lead the artificial intelligence team at the Keldysh Institute of Applied Mathematics of the Russian Academy of Sciences. He applied pattern recognition technology to earthquake prediction, oil and gas exploration, handwriting recognition, speech compression, and medical imaging.

From 1989 to 1992 Guberman held a chair professorship at Moscow Open University (Department of Geography). Since 1992 he has lived in the US. Guberman invented the handwriting recognition technology implemented in a commercial product by the company ParaGraph International, founded by Stepan Pachikov, and used today by Microsoft in Windows CE. [2] He is the author of core technologies for five US companies and owns a patent on speech compression. [3]
The common approach to computer handwriting recognition was learning from a set of examples (characters or words) presented as visual objects. Guberman proposed that presenting the script as a kinematic object, a gesture, i.e. a synergy of stylus movements producing the script, better matches the psychophysiology of human perception. [4]
Handwriting consists of seven primitives. The variations that characters undergo during writing are restricted by the rule that each element can be transformed only into its neighbor in the ordered sequence of primitives. During its evolution, Latin-like writing acquired resistance to natural variations in character shape: when one of the primitives is substituted by its neighbor, the interpretation of the character does not change.
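A minimal sketch of this substitution rule, in Python; the labels p1…p7 are placeholders, since the article does not enumerate the seven primitives themselves:

```python
# Hypothetical ordered sequence of the seven stroke primitives (placeholder labels).
PRIMITIVES = ["p1", "p2", "p3", "p4", "p5", "p6", "p7"]
INDEX = {p: i for i, p in enumerate(PRIMITIVES)}

def is_tolerated_variation(written, intended):
    """A variation is tolerated if every written primitive equals the intended
    one or its immediate neighbor in the ordered sequence of primitives."""
    return (len(written) == len(intended) and
            all(abs(INDEX[w] - INDEX[t]) <= 1 for w, t in zip(written, intended)))

# Substituting one primitive by its neighbor does not change the reading:
print(is_tolerated_variation(["p2", "p5"], ["p3", "p5"]))  # True
# A jump across the sequence would:
print(is_tolerated_variation(["p1", "p5"], ["p3", "p5"]))  # False
```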
Based on this approach, two US companies, ParaGraph and Parascript, developed the first commercial products for on-line and off-line free handwriting recognition, which were licensed by Apple, Microsoft, Boeing, Siemens and others. [5] [6] "Most commercially available natural handwriting software is based on ParaGraph or Parascript technology." [7]
The hypothesis that humans perceive handwriting, as well as other linear drawings (communication signals in general), not in the visual modality but in the motor modality [8] was later confirmed by the discovery of mirror neurons. The difference is that in the classical mirroring phenomena the motor response appears in parallel with the observed movement ("immediate action perception"), whereas in handwriting recognition the static stimulus is transformed into a time process by tracing the path of the pen on the paper. In both cases the observer is trying to understand the intention of the correspondent: "the understanding of what the person is doing and why he is doing it, is acquired through a mechanism that directly transforms visual information into a motor format". [9]
Speech is traditionally represented as a time sequence of phonemes: vowels and consonants. [10] Each vowel is mainly determined by the ratio between the volumes of the front and the back parts of the vocal tract. The ratio is defined by 1) the horizontal position of the tongue (back–forth), 2) the position of the lips (back–forth), and 3) the size of the pharynx, which can extend the cavity of the vocal tract far back. Most consonants can be described with three parameters: 1) the place of articulation (lips, teeth and so on), 2) the time pattern of interaction with the vocal tract (explosive or not), and 3) voiced or unvoiced sound. Because of the inertia of the articulatory organs (tongue, lips, jaw), any phoneme interferes with its neighbors and changes its sounding (co-articulation). As a result, each phoneme sounds different in a different context.

Guberman proposed a parallel model of speech production. [11] It states that vowels and consonants are generated not in sequence but in parallel. Two channels manage two different groups of muscles, which together define the geometry of the vocal tract and, respectively, the voice signal. The separation is possible because the generation of vowels and of consonants involves different muscles. For the vowels [o], [u] the lips are managed by the Mentalis and Orbicularis Oris muscles for protrusion and rounding, and for [i], [e] by the Buccinator and Risorius for retracting the lips. The tongue participates in creating the vowels through the Superior Longitudinal and Vertical muscles for lifting and for moving the whole tongue back and forth, and the Genioglossus for all consonants articulated in the front of the mouth (when the jaw is fixed). [12] For the labial consonants [p], [b], [v], [f] the lips are managed by the Labii Inferioris and Orbicularis Oris muscles for moving the lips and the jaw up and down, and the Zygomaticus Minor for moving the lower lip back for [v], [f].
From the hypothesis of Parallel Phonetic Coding it follows that:
1. Because vowels are defined as a particular ratio of the front and back volumes of the vocal tract, a vowel is present at every moment of speech (even during silence: the neutral vowel [ə], when no muscle of the vocal tract is innervated).
2. Any consonant in speech appears on the background of a vowel. The last consonant of a word is pronounced on the background of the neutral vowel [ə]. In consonant clusters, all consonants except the last are produced in parallel with [ə]. In the past, Russian writing required a special character denoting the neutral vowel, Ъ, to be written after a consonant at the end of a word (the rule was abolished in 1918).
3. The correct written code for the words soda and word is shown in (N), where the number of vowels in a syllable reflects the relative duration of the vowel. Such coding is used in Hebrew: in the word יצֵירֵ (peace), two points under the characters denote the vowel [e]. In Arabic the two channels carry different functions: the consonant stream keeps the meaning (the root), and the vowel stream either modifies the meaning of the root or expresses a grammatical category: kitab means "book", katib "writer", ia-ktub-u "he is writing", ma-ktab "school".
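The two-channel coding described in points 1–3 can be sketched as follows (Python; the encoding is illustrative and does not reproduce the actual code shown in figure (N)):

```python
NEUTRAL_VOWEL = "ə"

def parallel_code(consonants, vowels):
    """Represent a word as two aligned channels: a consonant channel whose
    slots may be empty (None) and a vowel channel that is defined at every
    position, defaulting to the neutral vowel where no vowel gesture is made."""
    assert len(consonants) == len(vowels)
    return [(c, v if v is not None else NEUTRAL_VOWEL)
            for c, v in zip(consonants, vowels)]

# One possible reading of "word" under points 1-2: the vowel channel carries [o]
# under the first consonant, and the word-final cluster "rd" rides on the
# neutral vowel. (Illustrative only.)
print(parallel_code(["w", "r", "d"], ["o", None, None]))
# [('w', 'o'), ('r', 'ə'), ('d', 'ə')]
```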
In the 1970s and 1980s Guberman developed artificial intelligence software and the corresponding technology for geological applications, and used it to predict the locations of giant oil and gas deposits. [13] [14] [15] [16]
In 1986 the team published a prognostic map for discovering giant oil and gas fields in the Andes of South America, [17] based on the abiogenic petroleum origin theory. The model proposed by Prof. Yury Pikovsky (Moscow State University) assumes that petroleum moves from the mantle to the surface through permeable channels created at the intersections of deep faults. [18] The technology uses 1) maps of morphostructural zoning (a method proposed and developed by Prof. E. Rantsman), which outline the morphostructural nodes (intersections of faults), and 2) a pattern recognition program that identifies nodes containing giant oil/gas fields. The map forecast that eleven nodes, which had not been developed at that time, contain giant oil or gas fields. These 11 sites covered only 8% of the total area of all the Andes basins. Thirty years later (in 2018), a comparison of the prognosis with reality was published. [19] Since the publication of the prognostic map in 1986, only six giant oil/gas fields have been discovered in the Andes region: Caño Limón, Cusiana, Cupiagua, and Volcanera (Llanos basin, Colombia), Camisea (Ucayali basin, Peru), and Incahuasi (Chaco basin, Bolivia). All discoveries were made in places shown on the 1986 prognostic map as promising areas.
The result is convincingly positive and constitutes a strong contribution in support of the abiogenic theory of oil origin.
In the middle of the 20th century the attention of seismologists was drawn to the phenomenon of chains of earthquakes consistently arising along large faults. [20] [21] Later it was interpreted as waves of tectonic strain. [22] In 1975 Guberman proposed the D-waves theory, which separates the local processes of stress accumulation from the triggering of earthquakes. [23] The basic postulates of this theory are: a) a strong earthquake changes the distribution of mass in the Earth's core and accordingly its rate of rotation ω; b) at times when ω reaches a local minimum, disturbances occur at both poles and propagate along meridians at a constant rate of 0.15°/year (D-waves); c) a strong earthquake occurs at a place where tectonic stresses have accumulated, and at the time when two D-waves (from the N and S poles) have met at that point.
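Postulate b) fixes the kinematics of a D-wave completely, so its arrival time at any latitude follows from simple arithmetic; a small sketch (Python, with hypothetical example values):

```python
D_WAVE_SPEED = 0.15  # degrees of latitude per year (postulate b)

def arrival_year(origin_year, from_north_pole, latitude):
    """Year in which a D-wave launched from a pole in origin_year reaches the
    given latitude (degrees, positive = north), moving along a meridian at
    the constant rate of 0.15 degrees per year."""
    start = 90.0 if from_north_pole else -90.0
    return origin_year + abs(latitude - start) / D_WAVE_SPEED

# Hypothetical example: a wave starting at the North Pole in year 1700 needs
# (90 - 60) / 0.15 = 200 years to reach latitude 60 N.
print(arrival_year(1700, True, 60.0))  # 1900.0
```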
This hypothesis and its consequences were supported by seismological data.
1. Postulate c) is illustrated by a plot of φ, the latitude of a strong earthquake, against T, its time of occurrence. Each line represents a D-wave travelling across the Earth at a constant speed of 0.15°/year and triggering strong earthquakes along the way. The dots represent the strong earthquakes (magnitude M ≥ 7.0) in the Aleutian Islands and Alaska. Similar results were demonstrated for California, South-eastern Europe, Asia Minor, Southern Chile, the South Sandwich Islands, New Zealand, France and Italy. [24] The probability that this can happen by chance is < 0.025 in each case.
2. The source of irregularity in the Earth's rotation could be a strong earthquake, which displaces huge masses of rock; to keep the angular momentum of the Earth constant, the angular speed of rotation ω has to change. [25] [26] Because of the low speed of the D-waves (0.15°/year), it takes more than 200 years after an event for them to reach the areas where earthquakes with magnitude M > 8 occur. Testing postulate b) therefore requires a very long interval of seismological records. In China, the seismic history has been documented for a very long period (from 180 A.D.). The time-space relations between the six strongest documented earthquakes in China are presented in the plot: earthquake #1 created two D-waves at the poles; one moved from the North Pole and after 332 years triggered earthquake #2; the other moved from the South Pole and after 858 years reached the location of earthquake #4, and so on. In total, the average deviation between the position of the D-wave at the time of an event and the location of the triggered earthquake is 0.4°, which is less than the error in determining the epicenters of the historical earthquakes.

3. From the D-waves hypothesis it follows that the epicenters of the strongest earthquakes should predominantly occur at the discrete D-latitudes (90°/2ⁿ)·i (i = 0, 1, 2, …), with n ≤ 5. [27] To test this statement, the areas of high seismicity on the Earth were divided into stripes parallel to the D-latitudes of order ≤ 4, each 5.625° wide (see the map).
In 43 regions, earthquakes with M ≥ 8.0 occurred; in each region the strongest earthquake was chosen, and in 31 of the regions the epicenter of the strongest earthquake lies close to the D-latitude, i.e. within 1° of it. Such a stripe occupies 0.36 of the area of each region, which is 5.625° wide. If the epicenters were randomly scattered over each of the 43 regions, the expected number of epicenters falling close to a D-latitude would be 43 × 0.36 ≈ 15, and the probability that 31 epicenters would be located inside the stripe is less than 0.005.
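The quoted significance level can be checked with a binomial tail probability (a rough sketch in Python using scipy; the exact test used in the cited work is not specified here):

```python
from scipy.stats import binom

n_regions, stripe_fraction, hits = 43, 0.36, 31

expected = n_regions * stripe_fraction                    # ~15 epicenters expected by chance
p_value = binom.sf(hits - 1, n_regions, stripe_fraction)  # P(X >= 31) if epicenters were random

print(round(expected, 1), p_value)  # 15.5 and a value far below 0.005
```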
Earthquakes are an essential part of tectonic movements on the Earth. It was shown that strong earthquakes occur at the intersections of faults, the morphostructural nodes. [17] This means that not only the earthquakes but also the large morphostructural nodes are located near the D-latitudes. Combining this with Prof. Pikovsky's hypothesis that the morphostructural nodes are pipes delivering oil from the mantle to the Earth's crust, it follows that big oil/gas fields should also be predominantly located at the discrete D-latitudes. This was shown in [28], and the corresponding parameter (distance to the D-latitude) was used in the search for giant oil/gas fields (see above). The fact that strong earthquakes occur at discrete D-latitudes influences the configuration of the network of tectonic faults. [29] It was also found that most accidents on oil, gas and water pipelines and on railroad rails happen at the morphostructural nodes. [30]
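A minimal sketch (Python; the order limit and the exact definition used in [28] are assumptions) of that parameter, the distance from a given latitude to the nearest D-latitude (90°/2ⁿ)·i:

```python
def d_latitudes(max_order=4):
    """All D-latitudes (90/2**n)*i with n <= max_order lying in [-90, 90]."""
    lats = set()
    for n in range(max_order + 1):
        step = 90.0 / 2 ** n
        i = 0
        while step * i <= 90.0:
            lats.update({step * i, -step * i})
            i += 1
    return sorted(lats)

def distance_to_d_latitude(latitude, max_order=4):
    """Angular distance from a given latitude to the nearest D-latitude."""
    return min(abs(latitude - d) for d in d_latitudes(max_order))

# A site at 40.0 N lies 0.625 degrees from the D-latitude 39.375 (= 5.625 * 7).
print(distance_to_d_latitude(40.0))  # 0.625
```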
Two types of treatment exist for patients with hemorrhagic strokes: passive (medicamental) and active (surgical). Prof. E. Kandel [31] (one of the pioneers of surgical treatment of hemorrhagic strokes) turned to the outstanding mathematician Prof. I. Gelfand for help in comparing the effectiveness of these two treatments. Guberman was chosen as the main architect of the project. First, it was decided to change the goal: instead of choosing the best treatment in general, to find the best treatment, conservative or surgical, for a particular patient ("treat the patient, not the disease"). For this, the pattern recognition technology developed earlier for geology (see above) was used. Two decision rules had to be developed: 1) predicting the outcome (life or death) of conservative treatment for a particular patient, and 2) predicting the outcome (life or death) of surgery for the same patient. The decisions are based on neurological and general symptoms collected during the first 12 hours after the patient arrives at the hospital. The obtained decision rules were first tested for two years: the collected data were sent to the computer, and the two prognoses (forecast outcomes of the operation and of the conservative treatment) were placed in the patient's file. A month later the computer predictions were compared with the outcomes; overall, 90% of the predictions were correct. Then followed the clinical implementation: the computer decisions were immediately sent to the surgeon on duty, who made the final decision. In five years, 90 patients received computer forecasts. [32] [33] In 16 cases the computer strongly recommended the operation: 11 of them were operated on and survived; for 5 patients the computer warning was disregarded (for various reasons), and all 5 died. In 5 cases the computer strongly recommended avoiding the operation: 3 of them were treated accordingly and survived, while 2 were operated on contrary to the computer's advice and died.
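A schematic of the decision logic described above (Python). The two predictor functions are purely illustrative placeholders; the actual decision rules, symptom sets and thresholds from the cited work are not reproduced here:

```python
def predict_conservative(symptoms):
    """Placeholder for the learned rule forecasting the outcome of conservative
    treatment from symptoms recorded in the first 12 hours (illustrative only)."""
    return "survival" if symptoms.get("consciousness") == "alert" else "death"

def predict_surgical(symptoms):
    """Placeholder for the learned rule forecasting the outcome of surgery
    (illustrative only; the threshold below is not from the cited work)."""
    return "survival" if symptoms.get("hematoma_volume_ml", 0) < 60 else "death"

def computer_forecast(symptoms):
    """Produce the two per-patient prognoses and their comparison; the surgeon
    on duty makes the final decision."""
    conservative = predict_conservative(symptoms)
    surgical = predict_surgical(symptoms)
    if surgical == "survival" and conservative == "death":
        advice = "operation strongly recommended"
    elif surgical == "death" and conservative == "survival":
        advice = "operation strongly discouraged"
    else:
        advice = "no strong preference"
    return {"conservative": conservative, "surgical": surgical, "advice": advice}

print(computer_forecast({"consciousness": "comatose", "hematoma_volume_ml": 40}))
# {'conservative': 'death', 'surgical': 'survival', 'advice': 'operation strongly recommended'}
```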
Guberman has published more than 180 papers in scientific journals in Russia, the US, France, Germany, Italy and Austria.