20th century in science

Science advanced dramatically during the 20th century. There were new and radical developments in the physical, life and human sciences, building on the progress made in the 19th century. [1]

The development of post-Newtonian theories in physics, such as special relativity, general relativity, and quantum mechanics, led to the development of nuclear weapons. New models of the structure of the atom led to developments in theories of chemistry and to new materials such as nylon and plastics. Advances in biology led to large increases in food production, as well as the elimination of diseases such as polio.

A vast number of new technologies were developed in the 20th century. Technologies such as electricity, the incandescent light bulb, the automobile and the phonograph, first developed at the end of the 19th century, were perfected and universally deployed. The first airplane flight took place in 1903, and by the end of the century large airliners such as the Boeing 777 and Airbus A330 could fly thousands of miles in a matter of hours. The development of television and computers caused massive changes in the dissemination of information.

Astronomy and spaceflight

American Buzz Aldrin during the first moonwalk in 1969. The relatively young aerospace engineering industries grew rapidly in the 66 years after the Wright brothers' first flight.

Biology and medicine

A stamp commemorating Alexander Fleming. His discovery of penicillin changed modern medicine by introducing the age of antibiotics.

Notable diseases

Chemistry

The first Solvay Conference was held in Brussels in 1911 and was considered a turning point in the world of physics and chemistry.

In 1903, Mikhail Tsvet invented chromatography, an important analytic technique. In 1904, Hantaro Nagaoka proposed an early nuclear model of the atom, in which electrons orbit a dense, massive nucleus. In 1905, Fritz Haber and Carl Bosch developed the Haber process for making ammonia, a milestone in industrial chemistry with deep consequences for agriculture. The Haber process, or Haber–Bosch process, combined nitrogen and hydrogen to form ammonia in industrial quantities for the production of fertilizer and munitions. Food production for half of the world's current population depends on this method of producing fertilizer. Haber, along with Max Born, proposed the Born–Haber cycle as a method for evaluating the lattice energy of an ionic solid. Haber has also been described as the "father of chemical warfare" for his work developing and deploying chlorine and other poisonous gases during World War I.

Robert A. Millikan, who is best known for measuring the charge on the electron, won the Nobel Prize in Physics in 1923.

In 1905, Albert Einstein explained Brownian motion in a way that definitively proved atomic theory. Leo Baekeland invented Bakelite, one of the first commercially successful plastics. In 1909, American physicist Robert Andrews Millikan - who had studied in Europe under Walther Nernst and Max Planck - measured the charge of individual electrons with unprecedented accuracy through the oil drop experiment, in which he measured the electric charges on tiny falling water (and later oil) droplets. His study established that any particular droplet's electrical charge is a multiple of a definite, fundamental value — the electron's charge — and was thus a confirmation that all electrons have the same charge and mass. Beginning in 1912, he spent several years investigating and finally proving Albert Einstein's proposed linear relationship between energy and frequency, providing the first direct photoelectric support for Planck's constant. In 1923, Millikan was awarded the Nobel Prize for Physics.
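
As an illustration of the charge-quantization argument, the sketch below (synthetic data, not Millikan's measurements or procedure) recovers a fundamental charge from a set of droplet charges by finding the candidate value whose integer multiples best fit them; the constant, noise level and search range are all illustrative assumptions.

```python
# Illustrative sketch only: droplet charges are generated as small integer
# multiples of a fundamental charge plus noise, and that charge is recovered
# by scanning candidates and minimizing the residual to the nearest multiple.
import numpy as np

rng = np.random.default_rng(0)
e_true = 1.602e-19      # coulombs (modern value, used only to build the fake data)
charges = e_true * rng.integers(1, 8, size=25) + rng.normal(0.0, 1e-21, size=25)

def residual(e):
    return np.sum((charges - np.round(charges / e) * e) ** 2)

candidates = np.linspace(1.4e-19, 1.8e-19, 4001)
best = min(candidates, key=residual)
print(f"estimated elementary charge: {best:.3e} C")
```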

In 1909, S. P. L. Sørensen introduced the pH concept and developed methods for measuring acidity. In 1911, Antonius Van den Broek proposed the idea that the elements on the periodic table are more properly organized by positive nuclear charge than by atomic weight. In 1911, the first Solvay Conference was held in Brussels, bringing together many of the most prominent scientists of the day. In 1912, William Henry Bragg and William Lawrence Bragg proposed Bragg's law and established the field of X-ray crystallography, an important tool for elucidating the crystal structure of substances. In 1912, Peter Debye developed the concept of the molecular dipole to describe asymmetric charge distribution in some molecules.
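
Bragg's law states that constructive interference occurs when n·λ = 2d·sin θ. A minimal worked example, with an assumed laboratory X-ray wavelength and an illustrative diffraction angle, shows how the law yields a lattice spacing:

```python
# A minimal worked example of Bragg's law, n*lambda = 2*d*sin(theta):
# solve for the lattice spacing d. The wavelength is that of a typical
# laboratory Cu K-alpha source; the diffraction angle is illustrative.
import math

wavelength = 1.54e-10          # m
theta = math.radians(19.3)     # first-order diffraction angle (illustrative)
n = 1
d = n * wavelength / (2.0 * math.sin(theta))
print(f"lattice spacing d = {d:.2e} m")   # ~2.3e-10 m
```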

In 1913, Niels Bohr, a Danish physicist, introduced the concepts of quantum mechanics to atomic structure by proposing what is now known as the Bohr model of the atom, in which electrons exist only in strictly defined circular orbits around the nucleus, similar to rungs on a ladder. The Bohr model is a planetary model in which the negatively charged electrons orbit a small, positively charged nucleus much as the planets orbit the Sun (except that the orbits are not planar): the attractive Coulomb (electrical) force between the positively charged nucleus and the negatively charged electrons is mathematically akin to the gravitational force that governs the solar system.
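
In the Bohr model the allowed energies of the hydrogen electron are E_n = −13.6 eV / n², and the wavelength of emitted light follows from the energy difference between two levels. A short sketch of that arithmetic, using rounded modern constants for illustration:

```python
# Bohr model's quantized hydrogen energy levels, E_n = -13.6 eV / n**2, and
# the wavelength of the photon emitted in the n = 3 -> n = 2 transition
# (the red Balmer line).
RYDBERG_EV = 13.606   # hydrogen ionization energy, eV
HC_EV_NM = 1239.84    # Planck constant times speed of light, eV*nm

def energy(n: int) -> float:
    return -RYDBERG_EV / n**2

photon_ev = energy(3) - energy(2)   # energy carried away by the photon
print(HC_EV_NM / photon_ev)         # ~656 nm
```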

In 1913, Henry Moseley, working from Van den Broek's earlier idea, introduced the concept of atomic number to fix inadequacies of Mendeleev's periodic table, which had been based on atomic weight. The peak of Frederick Soddy's career in radiochemistry came in 1913 with his formulation of the concept of isotopes, which stated that certain elements exist in two or more forms that have different atomic weights but are chemically indistinguishable. He is remembered for proving the existence of isotopes of certain radioactive elements, and is also credited, along with others, with the discovery of the element protactinium in 1917. In 1913, J. J. Thomson expanded on the work of Wien by showing that charged subatomic particles can be separated by their mass-to-charge ratio, a technique known as mass spectrometry.

In 1916, Gilbert N. Lewis published his seminal article "The Atom and the Molecule", which suggested that a chemical bond is a pair of electrons shared by two atoms. Lewis's model equated the classical chemical bond with the sharing of a pair of electrons between the two bonded atoms. In this paper Lewis introduced "electron dot diagrams" to symbolize the electronic structures of atoms and molecules. Now known as Lewis structures, they are discussed in virtually every introductory chemistry book. In 1923, Lewis developed the electron-pair theory of acids and bases: he redefined an acid as any atom or molecule with an incomplete octet that was thus capable of accepting electrons from another atom, while bases were electron donors. His theory is known as the concept of Lewis acids and bases. In 1923, G. N. Lewis and Merle Randall published Thermodynamics and the Free Energy of Chemical Substances, the first modern treatise on chemical thermodynamics.

The 1920s saw a rapid adoption and application of Lewis's model of the electron-pair bond in the fields of organic and coordination chemistry. In organic chemistry, this was primarily due to the efforts of the British chemists Arthur Lapworth, Robert Robinson, Thomas Lowry, and Christopher Ingold; while in coordination chemistry, Lewis's bonding model was promoted through the efforts of the American chemist Maurice Huggins and the British chemist Nevil Sidgwick.

Quantum chemistry

Some date the birth of quantum chemistry to the discovery of the Schrödinger equation and its application to the hydrogen atom in 1926. However, the 1927 article of Walter Heitler and Fritz London [5] is often recognised as the first milestone in the history of quantum chemistry. This was the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. In the following years much progress was accomplished by Edward Teller, Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Linus Pauling, Erich Hückel, Douglas Hartree, and Vladimir Aleksandrovich Fock, to cite a few.

Still, skepticism remained as to the general power of quantum mechanics applied to complex chemical systems. The situation around 1930 was described by Paul Dirac: [6]

The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. It therefore becomes desirable that approximate practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems without too much computation.

Hence the quantum mechanical methods developed in the 1930s and 1940s are often referred to as theoretical molecular or atomic physics, to underline the fact that they were more the application of quantum mechanics to chemistry and spectroscopy than answers to chemically relevant questions. A milestone in quantum chemistry was the seminal 1951 paper of Clemens C. J. Roothaan on the Roothaan equations. [7] It opened the way to the solution of the self-consistent field equations for small molecules like hydrogen or nitrogen. Those computations were performed with the help of tables of integrals computed on the most advanced computers of the time.
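
In modern matrix notation the Roothaan equations take the generalized eigenvalue form

$$\mathbf{F}\mathbf{C} = \mathbf{S}\mathbf{C}\,\boldsymbol{\varepsilon},$$

where F is the Fock matrix in a finite basis of atomic orbitals, S is the overlap matrix, C holds the molecular-orbital coefficients and ε is the diagonal matrix of orbital energies; because F itself depends on C, the equations are solved iteratively, which is the self-consistent field procedure mentioned above.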

In the 1940s many physicists turned from molecular or atomic physics to nuclear physics (like J. Robert Oppenheimer or Edward Teller). Glenn T. Seaborg was an American nuclear chemist best known for his work on isolating and identifying transuranium elements (those heavier than uranium). He shared the 1951 Nobel Prize for Chemistry with Edwin Mattison McMillan for their independent discoveries of transuranium elements. Seaborgium was named in his honour, making him one of three people, along with Albert Einstein and Yuri Oganessian, for whom a chemical element was named during his lifetime.

Molecular biology and biochemistry

By the mid-20th century, in principle, the integration of physics and chemistry was extensive, with chemical properties explained as the result of the electronic structure of the atom; Linus Pauling's book The Nature of the Chemical Bond used the principles of quantum mechanics to deduce bond angles in ever-more complicated molecules. However, though some principles deduced from quantum mechanics were able to predict qualitatively some chemical features of biologically relevant molecules, they remained, until the end of the 20th century, more a collection of rules, observations, and recipes than rigorous ab initio quantitative methods.

Diagrammatic representation of some key structural features of DNA

This heuristic approach triumphed in 1953 when James Watson and Francis Crick deduced the double helical structure of DNA by constructing models constrained by and informed by knowledge of the chemistry of the constituent parts and the X-ray diffraction patterns obtained by Rosalind Franklin. [8] This discovery led to an explosion of research into the biochemistry of life.

In the same year, the Miller–Urey experiment, conducted by Stanley Miller and Harold Urey, demonstrated that the basic constituents of protein, simple amino acids, could themselves be built up from simpler molecules in a simulation of primordial processes on Earth. Though many questions remain about the true nature of the origin of life, this was the first attempt by chemists to study hypothetical processes in the laboratory under controlled conditions. [9]

In 1983 Kary Mullis devised a method for the in-vitro amplification of DNA, known as the polymerase chain reaction (PCR), which revolutionized the chemical processes used to manipulate DNA in the laboratory. PCR could be used to synthesize specific pieces of DNA and made possible the sequencing of the DNA of organisms, which culminated in the huge Human Genome Project.
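
The power of PCR comes from its exponential character: in the idealized case each thermal cycle doubles the number of copies of the target sequence. A minimal sketch of that arithmetic (the efficiency parameter is an illustrative assumption; real reactions fall below 100%):

```python
# Idealized arithmetic of PCR: with perfect efficiency each thermal cycle
# doubles the number of copies of the target sequence, so n cycles give
# 2**n copies per starting template.
def copies(start: int, cycles: int, efficiency: float = 1.0) -> float:
    return start * (1.0 + efficiency) ** cycles

print(f"{copies(1, 30):.2e}")        # ~1.07e+09 copies after 30 ideal cycles
print(f"{copies(1, 30, 0.9):.2e}")   # ~2.3e+08 copies at 90% per-cycle efficiency
```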

An important piece of the double helix puzzle was solved by one of Pauling's students, Matthew Meselson, together with Frank Stahl; the result of their collaboration (the Meselson–Stahl experiment) has been called "the most beautiful experiment in biology".

They used a centrifugation technique that sorted molecules according to differences in weight. Because nitrogen is a component of DNA, isotopically labelled nitrogen could be tracked through successive rounds of DNA replication in bacteria.
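
Under semiconservative replication the expected outcome of the experiment is easy to tabulate: after one generation in the light medium every molecule is a heavy/light hybrid, and thereafter the hybrid fraction halves each generation. A small sketch of that idealized expectation (ignoring incomplete labelling):

```python
# Expected composition of DNA under semiconservative replication, as in the
# Meselson-Stahl experiment: cells grown on heavy nitrogen (15N) are switched
# to light nitrogen (14N); after one generation all molecules are hybrid, and
# the hybrid fraction then halves every generation.
def fractions(generations: int) -> dict:
    if generations == 0:
        return {"heavy": 1.0, "hybrid": 0.0, "light": 0.0}
    hybrid = 2 / 2**generations
    return {"heavy": 0.0, "hybrid": hybrid, "light": 1.0 - hybrid}

for g in range(4):
    print(g, fractions(g))   # hybrid fraction: 1.0, 0.5, 0.25 for g = 1, 2, 3
```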

Late 20th century

Buckminsterfullerene, C60

In 1970, John Pople developed the Gaussian program, greatly easing computational chemistry calculations. [10] In 1971, Yves Chauvin offered an explanation of the reaction mechanism of olefin metathesis reactions. [11] In 1975, Karl Barry Sharpless and his group discovered stereoselective oxidation reactions including the Sharpless epoxidation, [12] [13] Sharpless asymmetric dihydroxylation, [14] [15] [16] and Sharpless oxyamination. [17] [18] [19] In 1985, Harold Kroto, Robert Curl and Richard Smalley discovered fullerenes, a class of large carbon molecules superficially resembling the geodesic dome designed by architect R. Buckminster Fuller. [20] In 1991, Sumio Iijima used electron microscopy to discover a type of cylindrical fullerene known as a carbon nanotube, though earlier work had been done in the field as early as 1951. This material is an important component in the field of nanotechnology. [21] In 1994, Robert A. Holton and his group achieved the first total synthesis of Taxol. [22] [23] [24] In 1995, Eric Cornell and Carl Wieman produced the first Bose–Einstein condensate, a substance that displays quantum mechanical properties on the macroscopic scale. [25]

Earth science

In 1912 Alfred Wegener proposed the theory of continental drift. [26] This theory suggests that the shapes of the continents and the matching coastline geology between some of them indicate that they were once joined in a single landmass known as Pangaea; they subsequently separated and drifted like rafts over the ocean floor, eventually reaching their present positions. Additionally, the theory of continental drift offered a possible explanation for the formation of mountains; plate tectonics later built on the theory of continental drift.

Unfortunately, Wegener provided no convincing mechanism for this drift, and his ideas were not generally accepted during his lifetime. Arthur Holmes accepted Wegener's theory and provided a mechanism, mantle convection, to cause the continents to move. [27] However, it was not until after the Second World War that new evidence started to accumulate in support of continental drift. There followed a period of 20 extremely exciting years in which the theory of continental drift developed from being believed by a few to being the cornerstone of modern geology.

Beginning in 1947, research found new evidence about the ocean floor, and in 1960 Bruce C. Heezen published the concept of mid-ocean ridges. Soon after this, Robert S. Dietz and Harry H. Hess proposed that the oceanic crust forms as the seafloor spreads apart along mid-ocean ridges, a process called seafloor spreading. [28] This was seen as confirmation of mantle convection, and so the major stumbling block to the theory was removed. Geophysical evidence suggested lateral motion of continents and that oceanic crust is younger than continental crust. This geophysical evidence also spurred the hypothesis of paleomagnetism, the record of the orientation of the Earth's magnetic field preserved in magnetic minerals. British geophysicist S. K. Runcorn suggested the concept of paleomagnetism from his finding that the continents had moved relative to the Earth's magnetic poles. Tuzo Wilson, who was a promoter of the seafloor spreading hypothesis and continental drift from the very beginning, [29] added the concept of transform faults to the model, completing the classes of fault types necessary to make the mobility of the plates on the globe function. [30]

A symposium on continental drift held at the Royal Society of London in 1965 [31] must be regarded as the official start of the acceptance of plate tectonics by the scientific community; its abstracts were issued as Blackett, Bullard & Runcorn (1965). At this symposium, Edward Bullard and co-workers showed with a computer calculation how the continents along both sides of the Atlantic would best fit to close the ocean, which became known as the famous "Bullard's Fit". By the late 1960s, the weight of the evidence available meant that continental drift was the generally accepted theory.

Other theories of the causes of climate change fared no better. The principal advances were in observational paleoclimatology, as scientists in various fields of geology worked out methods to reveal ancient climates. Wilmot H. Bradley found that annual varves of clay laid down in lake beds showed climate cycles. Andrew Ellicott Douglass saw strong indications of climate change in tree rings. Noting that the rings were thinner in dry years, he reported climate effects from solar variations, particularly in connection with the 17th-century dearth of sunspots (the Maunder Minimum) noticed previously by William Herschel and others. Other scientists, however, found good reason to doubt that tree rings could reveal anything beyond random regional variations. The value of tree rings for climate study was not solidly established until the 1960s. [32] [33]

Through the 1930s the most persistent advocate of a solar-climate connection was astrophysicist Charles Greeley Abbot. By the early 1920s, he had concluded that the solar "constant" was misnamed: his observations showed large variations, which he connected with sunspots passing across the face of the Sun. He and a few others pursued the topic into the 1960s, convinced that sunspot variations were a main cause of climate change. Other scientists were skeptical. [32] [33] Nevertheless, attempts to connect the solar cycle with climate cycles were popular in the 1920s and 1930s. Respected scientists announced correlations that they insisted were reliable enough to make predictions. Sooner or later, every prediction failed, and the subject fell into disrepute. [34]

Meanwhile Milutin Milankovitch, building on James Croll's theory, improved the tedious calculations of the varying distances and angles of the Sun's radiation as the Sun and Moon gradually perturbed the Earth's orbit. Some observations of varves (layers seen in the mud covering the bottom of lakes) matched the prediction of a Milankovitch cycle lasting about 21,000 years. However, most geologists dismissed the astronomical theory, for they could not fit Milankovitch's timing to the accepted sequence, which had only four ice ages, all of them much longer than 21,000 years. [35]

In 1938 Guy Stewart Callendar attempted to revive Arrhenius's greenhouse-effect theory. Callendar presented evidence that both temperature and the CO2 level in the atmosphere had been rising over the past half-century, and he argued that newer spectroscopic measurements showed that the gas was effective in absorbing infrared in the atmosphere. Nevertheless, most scientific opinion continued to dispute or ignore the theory. [36]

Charles Keeling receiving the National Medal of Science from George W. Bush in 2001

Another clue to the nature of climate change came in the mid-1960s from analysis of deep-sea cores by Cesare Emiliani and analysis of ancient corals by Wallace Broecker and collaborators. Rather than four long ice ages, they found a large number of shorter ones in a regular sequence. It appeared that the timing of ice ages was set by the small orbital shifts of the Milankovitch cycles. While the matter remained controversial, some began to suggest that the climate system is sensitive to small changes and can readily be flipped from a stable state into a different one. [35]

Scientists meanwhile began using computers to develop more sophisticated versions of Arrhenius's calculations. In 1967, taking advantage of the ability of digital computers to integrate absorption curves numerically, Syukuro Manabe and Richard Wetherald made the first detailed calculation of the greenhouse effect incorporating convection (the "Manabe–Wetherald one-dimensional radiative-convective model"). [37] [38] They found that, in the absence of unknown feedbacks such as changes in clouds, a doubling of carbon dioxide from the current level would result in approximately a 2 °C increase in global temperature.
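
A back-of-the-envelope sketch shows why such results are quoted "per doubling" of CO2: radiative forcing grows roughly logarithmically with concentration, so every doubling adds about the same forcing. The 5.35 W/m² coefficient below is a much later simplified parameterization (Myhre et al., 1998) and the sensitivity parameter is only an assumed round number; neither comes from Manabe and Wetherald's model.

```python
# Logarithmic forcing approximation and a crude linear climate sensitivity.
# Both numbers are illustrative assumptions, not model output.
import math

def forcing(c_ppm: float, c0_ppm: float) -> float:
    return 5.35 * math.log(c_ppm / c0_ppm)      # W/m^2

sensitivity = 0.54                              # assumed K per (W/m^2), illustrative
print(sensitivity * forcing(600.0, 300.0))      # ~2.0 K for a doubling of CO2
```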

By the 1960s, aerosol pollution ("smog") had become a serious local problem in many cities, and some scientists began to consider whether the cooling effect of particulate pollution could affect global temperatures. Scientists were unsure whether the cooling effect of particulate pollution or warming effect of greenhouse gas emissions would predominate, but regardless, began to suspect that human emissions could be disruptive to climate in the 21st century if not sooner. In his 1968 book The Population Bomb, Paul R. Ehrlich wrote, "the greenhouse effect is being enhanced now by the greatly increased level of carbon dioxide... [this] is being countered by low-level clouds generated by contrails, dust, and other contaminants... At the moment we cannot predict what the overall climatic results will be of our using the atmosphere as a garbage dump." [39]

A 1968 study by the Stanford Research Institute for the American Petroleum Institute noted: [40]

If the earth's temperature increases significantly, a number of events might be expected to occur, including the melting of the Antarctic ice cap, a rise in sea levels, warming of the oceans, and an increase in photosynthesis. [..] Revelle makes the point that man is now engaged in a vast geophysical experiment with his environment, the earth. Significant temperature changes are almost certain to occur by the year 2000 and these could bring about climatic changes.

In 1969, NATO was the first candidate for dealing with climate change at an international level. The plan was to establish a hub of the organization's research and initiatives in the civil area, dealing with environmental topics [41] such as acid rain and the greenhouse effect. The suggestion of US President Richard Nixon was not very successful with the administration of German Chancellor Kurt Georg Kiesinger, but the topics and the preparatory work done on the NATO proposal by the German authorities gained international momentum (see, for example, the United Nations Conference on the Human Environment held in Stockholm in 1972), as the government of Willy Brandt began to apply them in the civil sphere instead. [41]

Also in 1969, Mikhail Budyko published a theory on the ice-albedo feedback, a foundational element of what is today known as Arctic amplification. [42] The same year a similar model was published by William D. Sellers. [43] Both studies attracted significant attention, since they hinted at the possibility for a runaway positive feedback within the global climate system. [44]
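
The ice-albedo feedback can be illustrated with a toy zero-dimensional energy-balance model in the spirit of (but far simpler than) the Budyko and Sellers studies; all parameter values below are illustrative assumptions, not values used by either author. Because a colder planet is icier and therefore more reflective, the balance equation admits more than one equilibrium temperature, which is what raised the possibility of abrupt transitions.

```python
# Toy zero-dimensional energy balance: absorbed sunlight = emitted infrared,
# with an albedo that rises as the planet cools (the ice-albedo feedback).
# All parameter values are illustrative assumptions.
import numpy as np

S0 = 1361.0          # solar constant, W/m^2
SIGMA = 5.67e-8      # Stefan-Boltzmann constant
EPS = 0.61           # effective emissivity (crude stand-in for the greenhouse effect)

def albedo(T):
    # 0.7 when fully ice-covered (T <= 240 K), 0.3 when ice-free (T >= 280 K)
    return np.clip(0.7 - 0.4 * (T - 240.0) / 40.0, 0.3, 0.7)

def imbalance(T):
    absorbed = (S0 / 4.0) * (1.0 - albedo(T))
    emitted = EPS * SIGMA * T**4
    return absorbed - emitted

T = np.linspace(200.0, 320.0, 12001)
f = imbalance(T)
equilibria = T[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
print(np.round(equilibria, 1))   # roughly [233, 249, 288] K: ice-covered, unstable, warm
```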

In the early 1970s, evidence that aerosols were increasing worldwide encouraged Reid Bryson and some others to warn of the possibility of severe cooling. Meanwhile, the new evidence that the timing of ice ages was set by predictable orbital cycles suggested that the climate would gradually cool, over thousands of years. For the century ahead, however, a survey of the scientific literature from 1965 to 1979 found 7 articles predicting cooling and 44 predicting warming (many other articles on climate made no prediction); the warming articles were cited much more often in subsequent scientific literature. [45] Several scientific panels from this time period concluded that more research was needed to determine whether warming or cooling was likely, indicating that the trend in the scientific literature had not yet become a consensus. [46] [47] [48]

John Sawyer published the study Man-made Carbon Dioxide and the “Greenhouse” Effect in 1972. [49] He summarized the knowledge of the science at the time: the anthropogenic attribution of the carbon dioxide greenhouse gas, its distribution, and its exponential rise, findings which still hold today. Additionally, he accurately predicted the rate of global warming for the period between 1972 and 2000. [50] [51]

The increase of 25% CO2 expected by the end of the century therefore corresponds to an increase of 0.6°C in the world temperature – an amount somewhat greater than the climatic variation of recent centuries. – John Sawyer, 1972
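
Assuming, as above, a roughly logarithmic dependence of forcing on concentration, Sawyer's figure is consistent with a sensitivity of about 2 °C per doubling of CO2:

$$\Delta T \approx 2\,^{\circ}\mathrm{C} \times \frac{\ln 1.25}{\ln 2} \approx 0.64\,^{\circ}\mathrm{C}.$$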

The mainstream news media at the time exaggerated the warnings of the minority who expected imminent cooling. For example, in 1975, Newsweek magazine published a story that warned of "ominous signs that the Earth's weather patterns have begun to change." [52] The article continued by stating that evidence of global cooling was so strong that meteorologists were having "a hard time keeping up with it." [52] On 23 October 2006, Newsweek issued an update stating that it had been "spectacularly wrong about the near-term future". [53]

In the first two "Reports for the Club of Rome", in 1972 [54] and 1974, [55] the anthropogenic climate changes caused by increased CO2 as well as by waste heat were mentioned. About the latter, John Holdren wrote in a study [56] cited in the first report: “… that global thermal pollution is hardly our most immediate environmental threat. It could prove to be the most inexorable, however, if we are fortunate enough to evade all the rest.” Simple global-scale estimates, [57] which have recently been updated [58] and confirmed by more refined model calculations, [59] [60] show noticeable contributions from waste heat to global warming after the year 2100 if its growth rate is not strongly reduced (below the average of 2% per year that has prevailed since 1973).

Evidence for warming accumulated. By 1975, Manabe and Wetherald had developed a three-dimensional global climate model that gave a roughly accurate representation of the current climate. Doubling CO2 in the model's atmosphere gave a roughly 2 °C rise in global temperature. [61] Several other kinds of computer models gave similar results: it was impossible to make a model that gave something resembling the actual climate without the temperature rising when the CO2 concentration was increased.

The 1979 World Climate Conference (12 to 23 February) of the World Meteorological Organization concluded "it appears plausible that an increased amount of carbon dioxide in the atmosphere can contribute to a gradual warming of the lower atmosphere, especially at higher latitudes....It is possible that some effects on a regional and global scale may be detectable before the end of this century and become significant before the middle of the next century." [62]

In July 1979 the United States National Research Council published a report, [63] concluding (in part):

When it is assumed that the CO2 content of the atmosphere is doubled and statistical thermal equilibrium is achieved, the more realistic of the modeling efforts predict a global surface warming of between 2°C and 3.5°C, with greater increases at high latitudes. … we have tried but have been unable to find any overlooked or underestimated physical effects that could reduce the currently estimated global warmings due to a doubling of atmospheric CO2 to negligible proportions or reverse them altogether.

James Hansen during his 1988 testimony to Congress, which alerted the public to the dangers of global warming

By the early 1980s, the slight cooling trend from 1945 to 1975 had stopped. Aerosol pollution had decreased in many areas due to environmental legislation and changes in fuel use, and it became clear that the cooling effect from aerosols was not going to increase substantially while carbon dioxide levels were progressively increasing.

Hansen and others published the 1981 study Climate impact of increasing atmospheric carbon dioxide, and noted:

It is shown that the anthropogenic carbon dioxide warming should emerge from the noise level of natural climate variability by the end of the century, and there is a high probability of warming in the 1980s. Potential effects on climate in the 21st century include the creation of drought-prone regions in North America and central Asia as part of a shifting of climatic zones, erosion of the West Antarctic ice sheet with a consequent worldwide rise in sea level, and opening of the fabled Northwest Passage. [64]

In 1982, Greenland ice cores drilled by Hans Oeschger, Willi Dansgaard, and collaborators revealed dramatic temperature oscillations in the space of a century in the distant past. [65] The most prominent of the changes in their record corresponded to the violent Younger Dryas climate oscillation seen in shifts in types of pollen in lake beds all over Europe. Evidently drastic climate changes were possible within a human lifetime.

In 1985 a joint UNEP/WMO/ICSU Conference on the "Assessment of the Role of Carbon Dioxide and Other Greenhouse Gases in Climate Variations and Associated Impacts" concluded that greenhouse gases "are expected" to cause significant warming in the next century and that some warming is inevitable. [66]

Meanwhile, ice cores drilled by a Franco-Soviet team at the Vostok Station in Antarctica showed that CO2 and temperature had gone up and down together in wide swings through past ice ages. This confirmed the CO2-temperature relationship in a manner entirely independent of computer climate models, strongly reinforcing the emerging scientific consensus. The findings also pointed to powerful biological and geochemical feedbacks. [67]

In June 1988, James E. Hansen made one of the first assessments that human-caused warming had already measurably affected global climate. [68] Shortly after, a "World Conference on the Changing Atmosphere: Implications for Global Security" gathered hundreds of scientists and others in Toronto. They concluded that the changes in the atmosphere due to human pollution "represent a major threat to international security and are already having harmful consequences over many parts of the globe," and declared that by 2005 the world would be well-advised to push its emissions some 20% below the 1988 level. [69]

The 1980s saw important breakthroughs with regard to global environmental challenges. Ozone depletion was mitigated by the Vienna Convention (1985) and the Montreal Protocol (1987). Acid rain was mainly regulated on national and regional levels.

Colors indicate temperature anomalies (NASA/NOAA; 20 January 2016). [70]

In 1988 the WMO established the Intergovernmental Panel on Climate Change (IPCC) with the support of the UNEP. The IPCC continues its work through the present day, and issues a series of Assessment Reports and supplemental reports that describe the state of scientific understanding at the time each report is prepared. Scientific developments during this period are summarized about once every five to six years in the IPCC Assessment Reports, which were published in 1990 (First Assessment Report), 1995 (Second Assessment Report), 2001 (Third Assessment Report), 2007 (Fourth Assessment Report), and 2013/2014 (Fifth Assessment Report). [71]

Since the 1990s, research on climate change has expanded and grown, linking many fields such as atmospheric science, numerical modeling, behavioral science, geology, economics, and security.

Engineering and technology

First flight of the Wright Flyer I, December 17, 1903, Orville piloting, Wilbur running at wingtip.

One of the prominent traits of the 20th century was the dramatic growth of technology. Organized research and practice of science led to advancement in the fields of communication, engineering, travel, medicine, and war.

Mathematics

The 20th century saw mathematics become a major profession. As in most areas of study, the explosion of knowledge in the scientific age has led to specialization: by the end of the century there were hundreds of specialized areas in mathematics, and the Mathematics Subject Classification was dozens of pages long. [72] Every year, thousands of new Ph.D.s in mathematics were awarded, and jobs were available in both teaching and industry. More and more mathematical journals were published and, by the end of the century, the development of the World Wide Web led to online publishing. Mathematical collaborations of unprecedented size and scope took place. An example is the classification of finite simple groups (also called the "enormous theorem"), whose proof between 1955 and 1983 required some 500 journal articles by about 100 authors, filling tens of thousands of pages.

In a 1900 speech to the International Congress of Mathematicians, David Hilbert set out a list of 23 unsolved problems in mathematics. These problems, spanning many areas of mathematics, formed a central focus for much of 20th-century mathematics. Today, 10 have been solved, 7 are partially solved, and 2 are still open. The remaining 4 are too loosely formulated to be stated as solved or not.

In 1929 and 1930, it was proved that the truth or falsity of all statements formulated about the natural numbers together with one of addition or multiplication (but not both) was decidable, i.e. could be determined by some algorithm. In 1931, Kurt Gödel found that this was not the case for the natural numbers together with both addition and multiplication; this system, known as Peano arithmetic, was in fact incompletable. (Peano arithmetic is adequate for a good deal of number theory, including the notion of prime number.) A consequence of Gödel's two incompleteness theorems is that in any mathematical system that includes Peano arithmetic (including all of analysis and geometry), truth necessarily outruns proof, i.e. there are true statements that cannot be proved within the system. Hence mathematics cannot be reduced to mathematical logic, and David Hilbert's dream of making all of mathematics complete and consistent needed to be reformulated.

In 1963, Paul Cohen proved that the continuum hypothesis is independent of (could neither be proved nor disproved from) the standard axioms of set theory. In 1976, Wolfgang Haken and Kenneth Appel used a computer to prove the four color theorem. Andrew Wiles, building on the work of others, proved Fermat's Last Theorem in 1995. In 1998 Thomas Callister Hales proved the Kepler conjecture.

Newtonian (red) vs. Einsteinian orbit (blue) of a lone planet orbiting a star, with relativistic precession of apsides

Differential geometry came into its own when Albert Einstein used it in general relativity. Entirely new areas of mathematics such as mathematical logic, topology, and John von Neumann's game theory changed the kinds of questions that could be answered by mathematical methods. All kinds of structures were abstracted using axioms and given names like metric spaces, topological spaces etc. As mathematicians do, the concept of an abstract structure was itself abstracted and led to category theory. Grothendieck and Serre recast algebraic geometry using sheaf theory. Large advances were made in the qualitative study of dynamical systems that Poincaré had begun in the 1890s. Measure theory was developed in the late 19th and early 20th centuries. Applications of measures include the Lebesgue integral, Kolmogorov's axiomatisation of probability theory, and ergodic theory. Knot theory greatly expanded. Quantum mechanics led to the development of functional analysis. Other new areas include Laurent Schwartz's distribution theory, fixed point theory, singularity theory and René Thom's catastrophe theory, model theory, and Mandelbrot's fractals. Lie theory with its Lie groups and Lie algebras became one of the major areas of study.

Non-standard analysis, introduced by Abraham Robinson, rehabilitated the infinitesimal approach to calculus, which had fallen into disrepute in favour of the theory of limits, by extending the field of real numbers to the hyperreal numbers, which include infinitesimal and infinite quantities. An even larger number system, the surreal numbers, was discovered by John Horton Conway in connection with combinatorial games.

The development and continual improvement of computers, at first mechanical analog machines and then digital electronic machines, allowed industry to deal with larger and larger amounts of data to facilitate mass production and distribution and communication, and new areas of mathematics were developed to deal with this: Alan Turing's computability theory; complexity theory; Derrick Henry Lehmer's use of ENIAC to further number theory and the Lucas-Lehmer test; Rózsa Péter's recursive function theory; Claude Shannon's information theory; signal processing; data analysis; optimization and other areas of operations research. In the preceding centuries much mathematical focus was on calculus and continuous functions, but the rise of computing and communication networks led to an increasing importance of discrete concepts and the expansion of combinatorics including graph theory. The speed and data processing abilities of computers also enabled the handling of mathematical problems that were too time-consuming to deal with by pencil and paper calculations, leading to areas such as numerical analysis and symbolic computation. Some of the most important methods and algorithms of the 20th century are: the simplex algorithm, the fast Fourier transform, error-correcting codes, the Kalman filter from control theory and the RSA algorithm of public-key cryptography.
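
Of the results just listed, the Lucas-Lehmer test is compact enough to show in full. The sketch below is its standard modern statement (not Lehmer's ENIAC code): the Mersenne number M_p = 2^p − 1, for odd prime p, is prime exactly when the sequence s₀ = 4, s_{k+1} = s_k² − 2 (mod M_p) reaches zero after p − 2 steps.

```python
# Lucas-Lehmer primality test for Mersenne numbers M_p = 2**p - 1 (p an odd prime).
def lucas_lehmer(p: int) -> bool:
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 31) if lucas_lehmer(p)])
# -> [3, 5, 7, 13, 17, 19, 31]  (2**11 - 1 = 2047 and 2**23 - 1 are composite)
```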

Physics

Quantum mechanics

Quantum mechanics in the 1920s
From left to right, top row: Louis de Broglie (1892–1987) and Wolfgang Pauli (1900–58); second row: Erwin Schrödinger (1887–1961) and Werner Heisenberg (1901–76)

In 1924, French quantum physicist Louis de Broglie published his thesis, in which he introduced a revolutionary theory of electron waves based on wave–particle duality. In his time, the wave and particle interpretations of light and matter were seen as being at odds with one another, but de Broglie suggested that these seemingly different characteristics were instead the same behavior observed from different perspectives — that particles can behave like waves, and waves (radiation) can behave like particles. De Broglie's proposal offered an explanation of the restricted motion of electrons within the atom. The first publications of de Broglie's idea of "matter waves" had drawn little attention from other physicists, but a copy of his doctoral thesis chanced to reach Einstein, whose response was enthusiastic. Einstein stressed the importance of de Broglie's work both explicitly and by building further on it.
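
De Broglie's relation λ = h/p implies that electrons of laboratory energies have wavelengths comparable to atomic spacings, which is why electron diffraction by crystals was soon observed. A short sketch of that estimate for a non-relativistic electron accelerated through an assumed 100 V:

```python
# de Broglie wavelength lambda = h / p for a non-relativistic electron whose
# kinetic energy comes from an accelerating voltage (100 V is illustrative).
import math

H = 6.626e-34        # Planck constant, J*s
M_E = 9.109e-31      # electron mass, kg
Q_E = 1.602e-19      # elementary charge, C

def de_broglie_wavelength(voltage: float) -> float:
    p = math.sqrt(2.0 * M_E * Q_E * voltage)   # momentum from kinetic energy e*V
    return H / p

print(de_broglie_wavelength(100.0))   # ~1.2e-10 m, comparable to atomic spacings
```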

In 1925, Austrian-born physicist Wolfgang Pauli developed the Pauli exclusion principle, which states that no two electrons around a single nucleus in an atom can occupy the same quantum state simultaneously, as described by four quantum numbers. Pauli made major contributions to quantum mechanics and quantum field theory – he was awarded the 1945 Nobel Prize for Physics for his discovery of the Pauli exclusion principle – as well as solid-state physics, and he successfully hypothesized the existence of the neutrino. In addition to his original work, he wrote masterful syntheses of several areas of physical theory that are considered classics of scientific literature.
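
Combined with the four quantum numbers (n, l, m_l, m_s), the exclusion principle fixes how many electrons a shell can hold: 2n² for principal quantum number n. A short counting sketch:

```python
# Count the distinct (n, l, m_l, m_s) states for a given principal quantum
# number n; with the exclusion principle this is the shell capacity 2*n**2.
def states_in_shell(n: int) -> int:
    count = 0
    for l in range(n):                     # l = 0 .. n-1
        for m_l in range(-l, l + 1):       # 2l + 1 values
            count += 2                     # m_s = +1/2 or -1/2
    return count

print([states_in_shell(n) for n in (1, 2, 3, 4)])   # [2, 8, 18, 32]
```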

In 1926 at the age of 39, Austrian theoretical physicist Erwin Schrödinger produced the papers that gave the foundations of quantum wave mechanics. In those papers he described his partial differential equation that is the basic equation of quantum mechanics and bears the same relation to the mechanics of the atom as Newton's equations of motion bear to planetary astronomy. Adopting a proposal made by Louis de Broglie in 1924 that particles of matter have a dual nature and in some situations act like waves, Schrödinger introduced a theory describing the behaviour of such a system by a wave equation that is now known as the Schrödinger equation. The solutions to Schrödinger's equation, unlike the solutions to Newton's equations, are wave functions that can only be related to the probable occurrence of physical events. The readily visualized sequence of events of the planetary orbits of Newton is, in quantum mechanics, replaced by the more abstract notion of probability. (This aspect of the quantum theory made Schrödinger and several other physicists profoundly unhappy, and he devoted much of his later life to formulating philosophical objections to the generally accepted interpretation of the theory that he had done so much to create.)
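
In modern notation the time-independent form of the equation reads

$$\Big(-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r})\Big)\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r}),$$

where ψ is the wave function and E an allowed energy; for the hydrogen atom, with V(r) = −e²/(4πε₀ r), the bound-state energies reproduce the levels of the Bohr model, E_n ∝ −1/n².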

German theoretical physicist Werner Heisenberg was one of the key creators of quantum mechanics. In 1925, Heisenberg discovered a way to formulate quantum mechanics in terms of matrices. For that discovery, he was awarded the Nobel Prize for Physics for 1932. In 1927 he published his uncertainty principle, upon which he built his philosophy and for which he is best known. Heisenberg was able to demonstrate that if you were studying an electron in an atom you could specify where it was (the electron's position) or how fast it was moving (the electron's velocity), but it was impossible to specify both precisely at the same time. He also made important contributions to the theories of the hydrodynamics of turbulent flows, the atomic nucleus, ferromagnetism, cosmic rays, and subatomic particles, and he was instrumental in planning the first West German nuclear reactor at Karlsruhe, together with a research reactor in Munich, in 1957. Considerable controversy surrounds his work on atomic research during World War II.
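
In its modern quantitative form the position–momentum uncertainty relation is

$$\Delta x\,\Delta p \ge \frac{\hbar}{2},$$

so the product of the uncertainties in an electron's position and momentum can never fall below a fixed quantum limit.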

Social sciences


References

  1. Agar, Jon (2012). Science in the Twentieth Century and Beyond. Cambridge: Polity Press. ISBN   978-0-7456-3469-2.
  2. Thomson, Sir William (1862). "On the Age of the Sun's Heat". Macmillan's Magazine. 5: 288–293.
  3. 1 2 3 "The Nobel Prize in Physiology or Medicine 1962". NobelPrize.org. Nobel Media AB. Retrieved November 5, 2011.
  4. 1 2 3 4 "James Watson, Francis Crick, Maurice Wilkins, and Rosalind Franklin". Science History Institute. June 2016. Archived from the original on 21 March 2018. Retrieved 20 March 2018.
  5. W. Heitler and F. London, Wechselwirkung neutraler Atome und Homöopolare Bindung nach der Quantenmechanik, Z. Physik, 44, 455 (1927).
  6. P.A.M. Dirac, Quantum Mechanics of Many-Electron Systems, Proc. R. Soc. London, A 123, 714 (1929).
  7. C.C.J. Roothaan, A Study of Two-Center Integrals Useful in Calculations on Molecular Structure, J. Chem. Phys., 19, 1445 (1951).
  8. Watson, J. and Crick, F., "Molecular Structure of Nucleic Acids" Nature, April 25, 1953, p 737–8
  9. The Miller Urey Experiment – Windows to the Universe
  10. W. J. Hehre, W. A. Lathan, R. Ditchfield, M. D. Newton, and J. A. Pople, Gaussian 70 (Quantum Chemistry Program Exchange, Program No. 237, 1970).
  11. Catalyse de transformation des oléfines par les complexes du tungstène. II. Télomérisation des oléfines cycliques en présence d'oléfines acycliques Die Makromolekulare Chemie Volume 141, Issue 1, Date: 9 February 1971, Pages: 161–176 Par Jean-Louis Hérisson, Yves Chauvin doi : 10.1002/macp.1971.021410112
  12. Katsuki, T.; Sharpless, K. B. J. Am. Chem. Soc. 1980, 102, 5974. (doi:10.1021/ja00538a077)
  13. Hill, J. G.; Sharpless, K. B.; Exon, C. M.; Regenye, R. Org. Synth., Coll. Vol. 7, p. 461 (1990); Vol. 63, p. 66 (1985). (Article)
  14. Jacobsen, E. N.; Marko, I.; Mungall, W. S.; Schroeder, G.; Sharpless, K. B. J. Am. Chem. Soc. 1988, 110, 1968. (doi:10.1021/ja00214a053)
  15. Kolb, H. C.; Van Nieuwenhze, M. S.; Sharpless, K. B. Chem. Rev. 1994, 94, 2483–2547. (Review) (doi:10.1021/cr00032a009)
  16. Gonzalez, J.; Aurigemma, C.; Truesdale, L. Org. Synth., Coll. Vol. 10, p. 603 (2004); Vol. 79, p. 93 (2002). (Article, archived 2010-08-24 at the Wayback Machine)
  17. Sharpless, K. B.; Patrick, D. W.; Truesdale, L. K.; Biller, S. A. J. Am. Chem. Soc. 1975, 97, 2305. (doi:10.1021/ja00841a071)
  18. Herranz, E.; Biller, S. A.; Sharpless, K. B. J. Am. Chem. Soc. 1978, 100, 3596–3598. (doi:10.1021/ja00479a051)
  19. Herranz, E.; Sharpless, K. B. Org. Synth., Coll. Vol. 7, p. 375 (1990); Vol. 61, p. 85 (1983). (Article, archived 2012-10-20 at the Wayback Machine)
  20. "The Nobel Prize in Chemistry 1996". Nobelprize.org. The Nobel Foundation. Retrieved 2007-02-28.
  21. "Benjamin Franklin Medal awarded to Dr. Sumio Iijima, Director of the Research Center for Advanced Carbon Materials, AIST". National Institute of Advanced Industrial Science and Technology. 2002. Archived from the original on 2007-04-04. Retrieved 2007-03-27.
  22. Robert A. Holton, Carmen Somoza, Hyeong Baik Kim, Feng Liang, Ronald J. Biediger, P. Douglas Boatman, Mitsuru Shindo, Chase C. Smith, Soekchan Kim, et al. "First total synthesis of taxol. 1. Functionalization of the B ring". J. Am. Chem. Soc. 1994, 116 (4), 1597–1598.
  23. Robert A. Holton, Hyeong Baik Kim, Carmen Somoza, Feng Liang, Ronald J. Biediger, P. Douglas Boatman, Mitsuru Shindo, Chase C. Smith, Soekchan Kim, et al. "First total synthesis of taxol. 2. Completion of the C and D rings". J. Am. Chem. Soc. 1994, 116 (4), 1599–1600.
  24. Robert A. Holton, R. R. Juo, Hyeong B. Kim, Andrew D. Williams, Shinya Harusawa, Richard E. Lowenthal, Sadamu Yogai. "A synthesis of taxusin". J. Am. Chem. Soc. 1988, 110 (19), 6558–6560.
  25. "Cornell and Wieman Share 2001 Nobel Prize in Physics". NIST News Release. National Institute of Standards and Technology. 2001. Archived from the original on 2007-06-10. Retrieved 2007-03-27.
  26. Wegener, Alfred (1912). "Die Herausbildung der Grossformen der Erdrinde (Kontinente und Ozeane), auf geophysikalischer Grundlage" (PDF). Petermanns Geographische Mitteilungen. 63: 185–95, 253–56, 305–09.
  27. Holmes, Arthur (1931). "Radioactivity and Earth Movements" (PDF). Transactions of the Geological Society of Glasgow. Geological Society of Glasgow. 18 (3): 559–606. doi:10.1144/transglas.18.3.559. S2CID   122872384.
  28. Hess, H. H. (1 November 1962). "History of Ocean Basins" (PDF). In A. E. J. Engel; Harold L. James; B. F. Leonard (eds.). Petrologic studies: a volume in honor of A. F. Buddington. Boulder, CO: Geological Society of America. pp. 599–620.
  29. Wilson, J. T. (1963). "Hypothesis on the Earth's behaviour". Nature. 198 (4884): 849–65. Bibcode:1963Natur.198..849H. doi:10.1038/198849a0. S2CID 4209203.
  30. Wilson, J. Tuzo (1965). "A new class of faults and their bearing on continental drift". Nature. 207 (4995): 343–47. Bibcode:1965Natur.207..343W. doi:10.1038/207343a0. S2CID   4294401.
  31. Blackett, P. M. S.; Bullard, E.; Runcorn, S. K., eds. (1965). "A Symposium on Continental Drift, held on 28 October 1965". Philosophical Transactions of the Royal Society A. Royal Society of London. 258 (1088).
  32. Spencer Weart (2011). "Changing Sun, Changing Climate". The Discovery of Global Warming.
  33. Hufbauer, K. (1991). Exploring the Sun: Solar Science since Galileo. Baltimore, MD: Johns Hopkins University Press.
  34. Lamb, Hubert H. (1997). Through All the Changing Scenes of Life: A Meteorologist's Tale. Norfolk, UK: Taverner. pp. 192–193. ISBN   1-901470-02-4.
  35. Spencer Weart (2011). "Past Climate Cycles: Ice Age Speculations". The Discovery of Global Warming.
  36. Fleming, James R. (2007). The Callendar Effect. The Life and Work of Guy Stewart Callendar (1898–1964), the Scientist Who Established the Carbon Dioxide Theory of Climate Change. Boston, MA: American Meteorological Society. ISBN   978-1878220769.
  37. Spencer Weart (2011). "General Circulation Models of Climate". The Discovery of Global Warming.
  38. Manabe S.; Wetherald R. T. (1967). "Thermal Equilibrium of the Atmosphere with a Given Distribution of Relative Humidity". Journal of the Atmospheric Sciences. 24 (3): 241–259. Bibcode:1967JAtS...24..241M. doi: 10.1175/1520-0469(1967)024<0241:teotaw>2.0.co;2 .
  39. Ehrlich, Paul R. (1968). The Population Bomb. San Francisco: Sierra Club. p. 52.
  40. E. Robinson; R.C. Robbins (1968). "Smoke & Fumes, Sources, Abundance & Fate of Atmospheric Pollutants". Stanford Research Institute.
  41. Hünemörder, Kai F. (2004). Die Frühgeschichte der globalen Umweltkrise und die Formierung der deutschen Umweltpolitik (1950–1973) [Early history of the global environmental crisis and the formation of German environmental policy, 1950–1973]. Franz Steiner Verlag. ISBN 3-515-08188-7.
  42. "Ice in Action: Sea ice at the North Pole has something to say about climate change". YaleScientific. 2016.
  43. William D. Sellers (1969). "A Global Climatic Model Based on the Energy Balance of the Earth-Atmosphere System". Journal of Applied Meteorology. 8 (3): 392–400. Bibcode:1969JApMe...8..392S. doi: 10.1175/1520-0450(1969)008<0392:AGCMBO>2.0.CO;2 .
  44. Jonathan D. Oldfield (2016). "Mikhail Budyko's (1920–2001) contributions to Global Climate Science: from heat balances to climate change and global ecology". Advanced Review. 7 (5): 682–692. doi: 10.1002/wcc.412 .
  45. Peterson, T.C.; W.M. Connolley; J. Fleck (2008). "The Myth of the 1970s Global Cooling Scientific Consensus" (PDF). Bulletin of the American Meteorological Society . 89 (9): 1325–1337. Bibcode:2008BAMS...89.1325P. doi:10.1175/2008BAMS2370.1. S2CID   123635044.
  46. Science and the Challenges Ahead. Report of the National Science Board. Washington, D.C.: National Science Board, National Science Foundation. 1974.
  47. W. M. Connolley. "The 1975 US National Academy of Sciences/National Research Council Report". Retrieved 28 June 2009.
  48. Reid A. Bryson: "A Reconciliation of several Theories of Climate Change", in: John P. Holdren (ed.): Global Ecology. Readings toward a Rational Strategy for Man, New York etc. 1971, pp. 78–84.
  49. J. S. Sawyer (1 September 1972). "Man-made Carbon Dioxide and the "Greenhouse" Effect". Nature. 239 (5366): 23–26. Bibcode:1972Natur.239...23S. doi:10.1038/239023a0. S2CID   4180899.
  50. Neville Nicholls (30 August 2007). "Climate: Sawyer predicted rate of warming in 1972". Nature. 448 (7157): 992. Bibcode:2007Natur.448..992N. doi: 10.1038/448992c . PMID   17728736.
  51. Dana Andrew Nuccitelli (3 March 2015). Climatology versus Pseudoscience: Exposing the Failed Predictions of Global Warming Skeptics. Praeger. pp. 22–25. ISBN 9781440832024.
  52. Peter Gwynne (1975). "The Cooling World" (PDF). Newsweek. Archived from the original (PDF) on 2013-04-20. Retrieved 2020-06-17.
  53. Jerry Adler (23 October 2006). "Climate Change: Prediction Perils". Newsweek.
  54. Meadows, D., et al., The Limits to Growth. New York 1972.
  55. Mesarovic, M., Pestel, E., Mankind at the Turning Point. New York 1974.
  56. John P. Holdren: "Global Thermal Pollution", in: John P. Holdren (ed.): Global Ecology. Readings toward a Rational Strategy for Man, New York etc. 1971, pp. 85–88. The author became Director of the White House Office of Science and Technology Policy in 2009.
  57. R. Döpel, "Über die geophysikalische Schranke der industriellen Energieerzeugung" [On the geophysical limit of industrial energy generation]. Wissenschaftliche Zeitschrift der Technischen Hochschule Ilmenau, ISSN 0043-6917, vol. 19 (1973, no. 2), pp. 37–52.
  58. H. Arnold, "Robert Döpel and his Model of Global Warming. An Early Warning – and its Update." Universitätsverlag Ilmenau (Germany) 2013. ISBN 978-3-86360-063-1.
  59. Chaisson E. J. (2008). "Long-Term Global Heating from Energy Usage". Eos. 89 (28): 253–260. Bibcode:2008EOSTr..89..253C. doi: 10.1029/2008eo280001 .
  60. Flanner, M. G. (2009). "Integrating anthropogenic heat flux with global climate models". Geophys. Res. Lett. 36 (2): L02801. Bibcode:2009GeoRL..36.2801F. doi:10.1029/2008GL036465.
  61. Manabe S.; Wetherald R. T. (1975). "The Effects of Doubling the CO2 Concentration on the Climate of a General Circulation Model". Journal of the Atmospheric Sciences. 32 (3): 3–15. Bibcode:1975JAtS...32....3M. doi: 10.1175/1520-0469(1975)032<0003:teodtc>2.0.co;2 .
  62. "Declaration of the World Climate Conference" (PDF). World Meteorological Organization. Retrieved 28 June 2009.
  63. National Research Council (1979). Carbon Dioxide and Climate: A Scientific Assessment. Washington, D.C.: The National Academies Press. doi:10.17226/12181. ISBN 978-0-309-11910-8.
  64. Hansen, J.; et al. (1981). "Climate impact of increasing atmospheric carbon dioxide". Science. 213 (4511): 957–966. Bibcode:1981Sci...213..957H. doi:10.1126/science.213.4511.957. PMID 17789014. S2CID 20971423.
  65. Dansgaard W.; et al. (1982). "A New Greenland Deep Ice Core". Science. 218 (4579): 1273–77. Bibcode:1982Sci...218.1273D. doi:10.1126/science.218.4579.1273. PMID   17770148. S2CID   35224174.
  66. World Meteorological Organisation (WMO) (1986). "Report of the International Conference on the assessment of the role of carbon dioxide and of other greenhouse gases in climate variations and associated impacts". Villach, Austria. Archived from the original on 21 November 2013. Retrieved 28 June 2009.
  67. Lorius Claude; et al. (1985). "A 150,000-Year Climatic Record from Antarctic Ice". Nature. 316 (6029): 591–596. Bibcode:1985Natur.316..591L. doi:10.1038/316591a0. S2CID   4368173.
  68. "Statement of Dr. James Hansen, Director, NASA Goddard Institute for Space Studies" (PDF). The Guardian . London. Retrieved 28 June 2009.
  69. WMO (World Meteorological Organization) (1989). The Changing Atmosphere: Implications for Global Security, Toronto, Canada, 27–30 June 1988: Conference Proceedings (PDF). Geneva: Secretariat of the World Meteorological Organization. Archived from the original (PDF) on 2012-06-29.
  70. Brown, Dwayne; Cabbage, Michael; McCarthy, Leslie; Norton, Karen (20 January 2016). "NASA, NOAA Analyses Reveal Record-Shattering Global Warm Temperatures in 2015". NASA . Retrieved 21 January 2016.
  71. "IPCC - Intergovernmental Panel on Climate Change". ipcc.ch.
  72. Mathematics Subject Classification 2000