Mathematics, science, technology and engineering of the Victorian era refers to the development of mathematics, science, technology and engineering during the reign of Queen Victoria.
Founded in 1799 with the stated purpose of "diffusing the Knowledge, and facilitating the general Introduction, of Useful Mechanical Inventions and Improvements; and for teaching, by Courses of Philosophical Lectures and Experiments, the application of Science to the common Purposes of Life," the Royal Institution was a proper scientific institution with laboratories, a lecture hall, libraries, and offices. In its first years, the Institution was dedicated to the improvement of agriculture using chemistry, prompted by trade restrictions with Europe. Such practical concerns continued through the next two centuries. However, it soon became apparent that additional funding was required in order for the Institution to continue. Some well-known experts were hired as lecturers and researchers. The most successful of them all was Sir Humphry Davy, whose lectures concerned a myriad of topics and were so popular that the original practical purpose of the Institution faded away. It became increasingly dominated by research in basic science. [1]
The professionalisation of science began in the aftermath of the French Revolution and soon spread to other parts of the Continent, including the German lands. It was slow to reach Britain, however. Master of Trinity College William Whewell coined the term scientist in 1833 to describe the new professional breed of specialists and experts studying what was still commonly known as natural philosophy. [2] In 1840, Whewell wrote, "We need very much a name to describe a cultivator of science in general. I should incline to call him a Scientist." The new term signalled the recognition of the importance of empiricism and inductive reasoning. [3] But this term was slow to catch on. As biologist Thomas Huxley indicated in 1852, the prospect of earning a decent living as a scientist remained remote despite the prestige of the occupation. It was possible for a scientist to "earn praise but not pudding," he wrote. Since its birth, the Royal Society of London had been a club of gentlemanly amateurs, some of whom were among the very best in their fields, people like Charles Darwin and James Prescott Joule. But the Society reformed itself in the 1830s and 1840s. By 1847, it only admitted the new breed of professionals. [2]
The Victorians were impressed by science and progress and felt that they could improve society in the same way as they were improving technology. Britain was the leading world centre for advanced engineering and technology. Its engineering firms were in worldwide demand for designing and constructing railways. [4] [5]
A necessary part of understanding scientific progress is the ease of scientific discovery. In many cases, from planetary science to mammalian biology, the ease of discovery since the 1700s and 1800s can be fitted to an exponentially decaying curve. But the rate of progress is also dependent on other factors, such as the number of researchers, the level of funding, and advances in technology. Thus the number of new species of mammals discovered between the late 1700s and late 1800s grew exponentially before leveling off in the 1900s; the general shape is known as the logistic curve. In other cases, a branch of study reached the point of saturation. For instance, the last major internal human organ, the parathyroid gland, was discovered in 1880 by Ivar Viktor Sandström. [6]
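In modern notation, such a saturating growth curve can be written as (a standard parameterisation, not one the Victorians themselves used):

```latex
N(t) = \frac{K}{1 + e^{-r(t - t_0)}}
```

where N(t) is the cumulative number of discoveries at time t, K is the saturation level, r is the growth rate, and t_0 is the inflection point at which discovery is fastest. For small t the curve grows roughly exponentially; as t grows large it levels off towards K.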
This does not mean that basic science was coming to an end. Many Victorian-era scientists despaired that all that remained was measuring quantities to the next decimal place, and that new discoveries would not change the contemporary scientific paradigm. Yet as the nineteenth century became the twentieth, science witnessed truly revolutionary discoveries, such as radioactivity, and basic science continued its advance, though a number of twentieth-century scientists shared the same pessimism as their late-Victorian counterparts. [7]
In the field of statistics, the nineteenth century saw significant innovations in data visualisation. William Playfair, who created charts of all sorts, justified them thus: "a man who has carefully investigated a printed table, finds, when done, that he has only a very faint and partial idea of what he has read; and that like a figure imprinted on sand, is soon totally erased and defaced." For example, in a chart showing the relationship between population and government revenue of some European nations, he used the areas of circles to represent the geographical sizes of those nations. In the same graph he used the slopes of lines to indicate the tax burden of a given population. While serving as a nurse during the Crimean War, Florence Nightingale drew the first pie charts representing the monthly fatality rates of the conflict, distinguishing deaths due to battle wounds (innermost section), those due to infectious disease (outer section), and those due to other causes (middle section). Her charts clearly showed that most deaths resulted from disease, which led the general public to demand improved sanitation at field hospitals. Although bar charts representing frequencies were first used by the Frenchman A. M. Guerry in 1833, it was the statistician Karl Pearson who gave them the name histograms. Pearson used them in an 1895 article mathematically analyzing biological evolution. One such histogram showed that buttercups with large numbers of petals were rarer. [8]
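The frequency binning behind a histogram can be sketched in a few lines of Python. The petal counts below are illustrative, not Pearson's actual 1895 measurements:

```python
# A minimal sketch of histogram binning: count how often each value
# occurs, then draw a bar per value. Data is illustrative, in the
# spirit of Pearson's buttercup petal counts.
from collections import Counter

petal_counts = [5, 5, 5, 6, 5, 7, 5, 6, 5, 8, 5, 6, 5, 5, 7, 5, 6, 5, 5, 9]

frequencies = Counter(petal_counts)
for petals in sorted(frequencies):
    # Each bar's length is the frequency of that petal count.
    print(petals, "#" * frequencies[petals])
```

As in Pearson's buttercup histogram, the rarity of many-petalled flowers shows up as progressively shorter bars to the right.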
Normal distributions, expressible in the form f(x) = (1/(σ√(2π))) e^(−(x−μ)²/(2σ²)), arose in various works on probability and the theory of errors. Belgian sociologist and statistician Adolphe Quetelet discovered its extremely wide applicability in his analysis of vast amounts of statistics of human physical characteristics such as height and other traits such as criminality and alcoholism. Quetelet derived the concept of the "average man" from his studies. Sir Francis Galton employed Quetelet's ideas in his research on mathematical biology. In his experiments with sweet peas in the 1870s, Galton discovered that the spread of the distributions of a particular trait did not change over the generations. He invented what he called the "quincunx" to demonstrate why mixtures of normal distributions were normal. Galton noticed that the means of a particular trait in the offspring generation differed from those of the parent generation, a phenomenon now known as regression to the mean. He found that the slopes of the regression lines of two given variables were the same if the two data sets were scaled by units of probable error and introduced the notion of the correlation coefficient, but noted that correlation does not imply causation. [8]
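Galton's observation can be restated in modern terms: once both variables are rescaled to standard units (each divided by its own measure of spread), the regression slope in either direction equals the correlation coefficient. A sketch, with illustrative height data rather than Galton's own:

```python
# A modern restatement of Galton's result: the correlation coefficient
# r equals the regression slope once both variables are in standard
# units. The parent/child heights below are illustrative, not Galton's data.
import math

parent = [64.0, 66.0, 67.0, 68.0, 70.0, 72.0]   # parent heights (inches)
child  = [65.0, 66.5, 67.0, 67.5, 69.0, 70.0]   # offspring heights (inches)

n = len(parent)
mx = sum(parent) / n
my = sum(child) / n
sxx = sum((x - mx) ** 2 for x in parent)          # spread of parent heights
syy = sum((y - my) ** 2 for y in child)           # spread of child heights
sxy = sum((x - mx) * (y - my) for x, y in zip(parent, child))

slope = sxy / sxx                      # least-squares slope of child on parent
r = sxy / math.sqrt(sxx * syy)         # Pearson correlation coefficient

print(round(slope, 3), round(r, 3))
```

A slope below 1 is regression to the mean: exceptionally tall parents have children closer to the average. In standard units, the slope of child on parent and of parent on child are both r, which is Galton's finding about rescaling by probable error.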
During the late nineteenth century, British statisticians introduced a number of methods to relate and draw conclusions from statistical quantities. Francis Edgeworth developed a test for statistical significance that estimated the "fluctuations"—twice the variance in modern language—from two given means. By modern standards, however, he was extremely conservative when it came to drawing conclusions about the significance of an observation. For Edgeworth, an observation was significant if it was at the level of 0.005, which is much stricter than the requirement of 0.05 to 0.01 commonly used today. Pearson defined the standard deviation and introduced the χ²-statistic (chi-squared). Pearson's student, George Udny Yule, demonstrated that one could compute the regression equation of a given data set using the method of least squares. [8]
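Pearson's χ² statistic compares observed category counts against the counts expected under a hypothesis, summing (observed − expected)²/expected over the categories. A minimal sketch with illustrative die-roll counts:

```python
# Pearson's chi-squared statistic: sum over categories of
# (observed - expected)^2 / expected. The counts below are
# illustrative: 120 rolls of a die, compared with the fair-die
# expectation of 20 per face.
observed = [18, 22, 16, 25, 21, 18]
expected = [20, 20, 20, 20, 20, 20]

chi_squared = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_squared, 2))  # → 2.7
```

A small value (here 2.7 across six categories) indicates the observed counts are consistent with the expectation; a large value would signal a significant discrepancy.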
In 1828, miller and autodidactic mathematician George Green published An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism , making use of the mathematics of potential theory developed by Continental mathematicians. But this paper fell on deaf ears until William Thomson read it, realised its significance, and had it reprinted in 1850. Green's work became a source of inspiration for the Cambridge school of mathematical physicists, which included Thomson himself, George Gabriel Stokes, and James Clerk Maxwell. Green's Essay contained what became known as Green's theorem, a basic result in vector calculus, Green's identities, and the notion of Green's functions, which appears in the study of differential equations. [9] [10] Thomson went on to prove Stokes' theorem, which earned that name after Stokes asked students to prove it in the Smith's Prize exam in 1854; Stokes had learned it from Thomson in a letter in 1850. Stokes' theorem generalises Green's theorem, which itself is a higher-dimensional version of the Fundamental Theorem of Calculus. [10] [11] Research in physics—in particular elasticity, heat conduction, hydrodynamics, and electromagnetism—motivated the development of vector calculus in the nineteenth century. [9] [11]
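In modern vector notation (which postdates both men), the two theorems read:

```latex
\oint_{\partial D} \left( P\,dx + Q\,dy \right)
  = \iint_{D} \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dx\,dy
\qquad\text{(Green)}
```

```latex
\oint_{\partial S} \mathbf{F}\cdot d\mathbf{r}
  = \iint_{S} \left( \nabla \times \mathbf{F} \right)\cdot d\mathbf{S}
\qquad\text{(Stokes)}
```

Here D is a plane region with boundary curve ∂D, and S is a surface in space with boundary curve ∂S. Green's theorem is the special case of Stokes' theorem in which the surface lies flat in the plane: both convert an integral over a boundary into an integral of a derivative over the interior, echoing the Fundamental Theorem of Calculus.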
Arthur Cayley is credited with the creation of the theory of matrices—rectangular arrays of numbers—as distinct objects from determinants, studied since the mid-eighteenth century. The term matrix was coined by James Joseph Sylvester, a major contributor to the theory of determinants. It is difficult to overestimate the value of matrix theory to modern theoretical physics. Peter Tait wrote, prophetically, that Cayley was "forging the weapons for future generations of physicists." [12]
Early contributions to the study of elasticity—how objects behave under stresses, pressures, and loads—employed ad hoc hypotheses to solve specific problems. It was during the nineteenth century that scientists began to work out a thorough theory. In 1821, using an analogy with elastic bodies, French professor of mechanics Claude-Louis Navier arrived at the basic equations of motion for viscous fluids. George Gabriel Stokes re-derived them in 1845 using continuum mechanics in a paper titled "On the Theories of Internal Friction of Fluids in Motion." In it, Stokes sought to develop a mathematical description for all known fluids that takes into account viscosity, or internal friction. These are now referred to as the Navier–Stokes equations. [13]
In 1852, Stokes showed that light polarisation can be described in terms of what are now known as the Stokes parameters. The Stokes parameters for a given wave may be viewed as a vector. [14]
Founded in the eighteenth century, the calculus of variations grew into a much favored mathematical tool among physicists. Scientific problems thus became the impetus for the development of the subject. William Rowan Hamilton advanced it in the course of constructing a deductive framework for optics; he then applied the same ideas to mechanics. [15] With an appropriate variational principle, one could deduce the equations of motion for a given mechanical or optical system. Soon, scientists worked out the variational principles for the theory of elasticity, electromagnetism, and fluid mechanics (and, in the future, relativity and quantum theory). Whilst variational principles did not necessarily provide a simpler way to solve problems, they were of interest for philosophical or aesthetic reasons, though scientists at this time were not as motivated by religion in their work as their predecessors. [15] Hamilton's work in physics was a great achievement; he was able to provide a unifying mathematical framework for wave propagation and particle motion. [16] In light of this description, it becomes clear why the wave and corpuscle theories of light were equally able to account for the phenomena of reflection and refraction. [17] Hamilton's equations also proved useful in calculating planetary orbits. [16]
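In modern notation, Hamilton's variational principle and the equations of motion named after him can be written as:

```latex
\delta \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt = 0,
\qquad
\dot{q}_i = \frac{\partial H}{\partial p_i}, \quad
\dot{p}_i = -\frac{\partial H}{\partial q_i}
```

Here L is the Lagrangian of the system, H the Hamiltonian, and q_i and p_i the generalised coordinates and their conjugate momenta. The principle states that the actual motion makes the time integral of L stationary; Hamilton's equations then give the same dynamics as a system of first-order equations, the form later found so fruitful in celestial mechanics and, eventually, quantum theory.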
In 1845, John James Waterston submitted to the Royal Society a paper on the kinetic theory of gases that included a statement of the equipartition theorem and a calculation of the ratio of the specific heats of gases. Although the paper was read before the Society and its abstract published, Waterston's paper faced antipathy. At this time, the kinetic theory of gases was considered highly speculative as it was based on the then-unaccepted atomic hypothesis. [18] But by the mid-1850s, interest was revived. In the 1860s, James Clerk Maxwell published a series of papers on the subject. Unlike those of his predecessors, who were only using averages, Maxwell's papers were explicitly statistical in nature. He proposed that the speeds of molecules in a gas followed a distribution. Although the speeds would cluster around the average, some molecules were moving faster or slower than this average. He showed that this distribution is a function of temperature and mathematically described various properties of gases, such as diffusion and viscosity. He predicted, surprisingly, that the viscosity of a gas is independent of its density. This was verified at once by a series of experiments Maxwell conducted with his wife, Katherine. Experimental verification of the Maxwell distribution was not obtained till 60 years later, however. In the meantime, the Austrian Ludwig Boltzmann developed Maxwell's statistics further and proved, in 1872, using the "H-function," that the Maxwellian distribution is stable and any non-Maxwellian distribution would morph into it. [19]
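In its modern form (with notation that postdates Maxwell), the Maxwell speed distribution for molecules of mass m in a gas at temperature T is:

```latex
f(v) = 4\pi \left( \frac{m}{2\pi k T} \right)^{3/2} v^{2} \exp\!\left( -\frac{m v^{2}}{2 k T} \right)
```

where f(v) dv is the fraction of molecules with speeds between v and v + dv and k is Boltzmann's constant (named after Maxwell's time). The v² factor suppresses very slow molecules and the exponential suppresses very fast ones, producing the clustering around an average speed that Maxwell described; raising T broadens the distribution and shifts its peak to higher speeds.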
In his Dynamics of Rigid Bodies (1877), Edward John Routh noted the importance of what he called "absent coordinates," also known as cyclic coordinates or ignorable coordinates (following the terminology of E. T. Whittaker). Such coordinates are associated with conserved momenta and as such are useful in problem solving. [20] Routh also devised a new method for solving problems in mechanics. Although Routh's procedure does not add any new insights, it allows for more systematic and convenient analysis, especially in problems with many degrees of freedom and at least some cyclic coordinates. [21] [22]
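In Lagrangian terms, a cyclic (ignorable) coordinate is one on which the Lagrangian L does not depend; its conjugate momentum is then conserved:

```latex
\frac{\partial L}{\partial q_k} = 0
\;\Longrightarrow\;
p_k = \frac{\partial L}{\partial \dot{q}_k} = \text{const.}
```

This is why such coordinates aid problem solving: each one removes a degree of freedom from the equations of motion, which is precisely what Routh's procedure exploits systematically.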
In 1899, at the request of the British Association for the Advancement of Science from the year before, Edmund Taylor Whittaker submitted his Report on the Progress of the Solution of the Problem of Three Bodies. At that time, classical mechanics in general and the three-body problem in particular captured the imagination of many talented mathematicians, whose contributions Whittaker covered in his Report. Whittaker later incorporated the Report into his textbook titled Analytical Dynamics of Particles and Rigid Bodies (first edition 1907). It helped provide the scientific basis for the aerospace industry in the twentieth century. Despite its age, it remains in print in the early twenty-first century. [23]
During the 1830s and 1840s, the traditional caloric theory of heat began losing favour to "dynamical" alternatives, which posit that heat is a kind of motion. Brewer and amateur scientist James Prescott Joule was one of the proponents of the latter. Joule's intricate experiments—the most successful of which involved heating water with paddle wheels—made full use of his skill in temperature control as a brewer and demonstrated decisively the reality of the "mechanical equivalent of heat." What would later become known as the "conservation of energy" was pursued by many other workers approaching the subject from a variety of backgrounds, from medicine and physiology to physics and engineering. Another notable contributor to this development was the German researcher Hermann von Helmholtz, who gave an essentially Newtonian, that is, mechanical, account. William Thomson (later Lord Kelvin) received the works of Joule and Helmholtz positively, embracing them as providing support for the emerging "science of energy." [18] In the late 1840s and 1850s, Kelvin, his friend William John Macquorn Rankine, and the German Rudolf Clausius published a steady stream of papers concerning heat engines and an absolute temperature scale. Indeed, the commercial value of the new science had already become apparent by this time; some businessmen were quite willing to offer generous financial support for researchers. Rankine spoke confidently of the new science of thermodynamics, a term Kelvin coined in 1854, whose fundamental principles came to be known as the First and Second Laws and whose core concepts were "energy" and "entropy." [2] Kelvin and Peter Guthrie Tait's Treatise on Natural Philosophy (1867) was an attempt to reformulate physics in terms of energy. Here, Kelvin and Tait introduced the phrase kinetic energy (instead of 'actual'), now in standard usage. The phrase potential energy was promoted by Rankine. [2]
On the practical side, the food-preserving effect of low temperatures had long been recognised. Natural ice was vigorously traded in the early nineteenth century, but it was inevitably in short supply, especially in Australia. During the eighteenth and nineteenth centuries, there was considerable commercial incentive to develop ever more effective refrigerators thanks to the expansion of agriculture in the Americas, Australia, and New Zealand and rapid urbanization in Western Europe. From the 1830s onward, refrigerators relied on the expansion of compressed air or the evaporation of a volatile liquid; evaporation became the basis of all modern refrigerator designs. Long-distance shipping of perishable foods, such as meat, boomed in the late 1800s. [24]
On the theoretical side, new refrigeration techniques were also of great value. From his absolute temperature scale, Lord Kelvin deduced the existence of absolute zero occurring at −273.15 °C. Scientists began trying to reach ever lower temperatures and to liquefy every gas they encountered. This paved the way for the development of low-temperature physics and the Third Law of Thermodynamics. [24]
The study of natural history was most powerfully advanced by Charles Darwin and his theory of evolution, first published in his book On the Origin of Species in 1859.
Research in geology and evolutionary biology naturally led to the question of how old the Earth was. Indeed, from the mid-1700s to the mid-1800s, this was the topic of increasingly sophisticated intellectual discussions. With the advent of thermodynamics, it became clear that the Earth and the Sun must have an old but finite age. Whatever the energy source of the Sun, it must be finite, and since it is constantly dissipating, there must be a day when the Sun runs out of energy. Lord Kelvin wrote in 1852, "...within a finite period of time past the earth must have been, and within a finite period of time to come the earth must again be, unfit for the habitation of man as at present constituted, unless operations have been, or are to be performed, which are impossible under the laws to which the known operations going on are subject." In the 1860s, Kelvin employed a mathematical model by von Helmholtz suggesting that the energy of the Sun is released via gravitational collapse to calculate the age of the Sun to be between 50 and 500 million years. He reached comparable figures for the Earth. The missing ingredient here was radioactivity, which was not known to science till the end of the nineteenth century. [2]
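In modern notation, the gravitational-contraction argument estimates the Sun's lifetime by dividing the gravitational energy available, of order GM²/R, by the rate at which it is radiated away, the luminosity L:

```latex
t \;\sim\; \frac{G M_\odot^{2}}{R_\odot L_\odot} \;\approx\; 3 \times 10^{7}\ \text{years}
```

The figure on the right uses today's measured solar values; Kelvin's own range of 50 to 500 million years depended on his assumptions about the Sun's contraction history. Either way, the timescale falls far short of the billions of years demanded by geology and evolution, a conflict resolved only once radioactivity and, later, nuclear fusion were understood.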
After the Dane Hans Christian Ørsted demonstrated that it was possible to deflect a magnetic needle by closing or opening an electric circuit nearby, a deluge of papers attempting to explain the phenomenon was published. Michael Faraday set himself to the task of clarifying the nature of electricity and magnetism by experiments. In doing so, he devised what could be described as the first electric motor (though it does not resemble a modern one), a transformer (now used to step up the voltage and step down the current or vice versa), and a dynamo (which contains the basics of all electric turbine generators). [25] The practical value of Faraday's research on electricity and magnetism was nothing short of revolutionary. A dynamo converts mechanical energy into an electrical current whilst a motor does the reverse. The world's first power plants entered service in 1883, and by the following year, people realized the possibility of using electricity to power a variety of household appliances. Inventors and engineers soon raced to develop such items, starting with affordable and durable incandescent light bulbs, perhaps the most important of the early applications of electricity. [25]
As the foremost expert on electricity and magnetism at the time, Lord Kelvin oversaw the laying of the trans-Atlantic telegraphic cable, which was successfully completed in 1866. [2] Drawing on the work of his predecessors, especially the experimental research of Michael Faraday, the analogy with heat flow by Lord Kelvin, and the mathematical analysis of George Green, James Clerk Maxwell synthesized all that was known about electricity and magnetism into a single mathematical framework, Maxwell's equations. [26] Maxwell used his equations to predict the existence of electromagnetic waves, which travel at the speed of light. In other words, light is but one kind of electromagnetic wave. Maxwell's theory predicted there ought to be other types, with different frequencies. After some ingenious experiments, Maxwell's prediction was confirmed by German physicist Heinrich Hertz. In the process, Hertz generated and detected what are now called radio waves and built crude radio antennas and the predecessors of satellite dishes. [27] Dutch physicist Hendrik Lorentz derived, using suitable boundary conditions, Fresnel's equations for the reflection and transmission of light in different media from Maxwell's equations. He also showed that Maxwell's theory succeeded in illuminating the phenomenon of light dispersion where other models failed. John William Strutt (Lord Rayleigh) and the American Josiah Willard Gibbs then proved that the optical equations derived from Maxwell's theory are the only self-consistent description of the reflection, refraction, and dispersion of light consistent with experimental results. Optics thus found a new foundation in electromagnetism. [26]
But it was Oliver Heaviside, an enthusiastic supporter of Maxwell's electromagnetic theory, who deserves most of the credit for shaping how people understood and applied Maxwell's work for decades to come. [28] Maxwell originally wrote down a grand total of 20 equations for the electromagnetic field, which he later reduced to eight. Heaviside rewrote them in the form commonly used today, just four expressions. In addition, Heaviside was responsible for considerable progress in electrical telegraphy, telephony, and the study of the propagation of electromagnetic waves. Independent of Gibbs, Heaviside assembled a set of mathematical tools known as vector calculus to replace the quaternions, which were in vogue at the time but which Heaviside dismissed as "antiphysical and unnatural." [28]
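Heaviside's four equations, in the modern vector notation he helped establish (written here in SI units, a later convention):

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

Here E and B are the electric and magnetic fields, ρ the charge density, and J the current density. The four equations compactly encode Gauss's law, the absence of magnetic monopoles, Faraday's law of induction, and Ampère's law with Maxwell's displacement-current correction, the term that makes electromagnetic waves possible.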
Faraday also investigated how electrical currents affected chemical solutions. His experiments led him to the two laws of electrochemistry. Together with Whewell, Faraday introduced the basic vocabulary for the subject, the words electrode, anode, cathode, electrolysis, electrolyte, ion, anion, and cation. They remain in standard usage. But Faraday's work was of value to more than just chemists. In his Faraday Memorial Lecture in 1881, the German Hermann von Helmholtz asserted that Faraday's laws of electrochemistry hinted at the atomic structure of matter. If the chemical elements were distinguishable from one another by simple ratios of mass, and if the same amounts of electricity deposited amounts of these elements upon the poles in similarly simple ratios, then electricity must also come in discrete units, later named electrons. [25]
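In modern form, Faraday's two laws of electrolysis combine into a single expression for the mass m of an element deposited at an electrode:

```latex
m = \frac{Q}{F} \cdot \frac{M}{z}
```

where Q is the total charge passed, M the molar mass of the element, z the valence of its ion, and F the Faraday constant (about 96,485 coulombs per mole, a name and value that postdate Faraday). That a fixed quantity of charge always deposits a fixed number of chemical equivalents is exactly the regularity Helmholtz read as evidence for a discrete unit of electricity.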
In the late nineteenth century, the nature of the energy emitted by the discharge between high-voltage electrodes inside an evacuated tube—cathode rays—attracted the attention of many physicists. While the Germans thought cathode rays were waves, the British and the French believed they were particles. Working at the Cavendish Laboratory, established by Maxwell, J. J. Thomson directed a delicate experiment demonstrating that cathode rays were in fact negatively charged particles, now called electrons. The experiment enabled Thomson to calculate the ratio between the magnitude of the charge and the mass of the particle (e/m). In addition, because the ratio was the same regardless of the metal used, Thomson concluded that electrons must be a constituent of all atoms. Although the atoms of each chemical element have different numbers of electrons, all electrons are identical. [29]
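One modern reconstruction of the crossed-field method runs as follows. Perpendicular electric and magnetic fields are tuned until the beam is undeflected, which fixes the particle speed; the magnetic deflection alone then yields the charge-to-mass ratio:

```latex
eE = evB \;\Rightarrow\; v = \frac{E}{B},
\qquad
\frac{e}{m} = \frac{v\,\theta}{B L}
```

Here E and B are the field strengths, L the length of the deflecting region, and θ the small deflection angle observed with the electric field switched off. This is a schematic account of the measurement, not Thomson's exact working; the essential point is that every quantity on the right-hand side is directly measurable.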
Inspired by the explorations in abstract algebra of George Peacock and Augustus de Morgan, George Boole published a book titled An Investigation of the Laws of Thought (1854), in which he brought the study of logic from philosophy and metaphysics to mathematics. His stated goal was to "investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic and construct its method." Although ignored at first, Boolean algebra, as it is now known, became central to the design of circuits and computers in the following century. [30]
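The laws Boole formalised are exactly the identities that circuit designers later relied on. A short Python sketch checks De Morgan's laws, named after Boole's contemporary, over every possible truth assignment:

```python
# Boolean algebra in action: verify De Morgan's laws by exhaustively
# checking all truth assignments, the kind of algebraic identity that
# later underpinned logic-circuit design.
from itertools import product

for a, b in product([False, True], repeat=2):
    # NOT (a AND b) == (NOT a) OR (NOT b)
    assert (not (a and b)) == ((not a) or (not b))
    # NOT (a OR b) == (NOT a) AND (NOT b)
    assert (not (a or b)) == ((not a) and (not b))

print("De Morgan's laws hold for all inputs")
```

Because a Boolean expression over n variables has only 2ⁿ assignments, such identities can be verified by exhaustive enumeration, which is precisely what makes Boole's calculus mechanisable.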
The desire to construct calculating machines is not new. In fact, it can be traced all the way back to the Hellenistic Civilization. While people have devised such machines over the centuries, mathematicians continued to perform calculations by hand, as machines offered little advantage in speed. For complicated calculations, they employed tables, especially of logarithmic and trigonometric functions, which were computed by hand. But right in the middle of the Industrial Revolution in England, Charles Babbage thought of using the all-important steam engine to power a mechanical computer, the Difference Engine. Unfortunately, whilst Babbage managed to secure government funds for the construction of the machine, the government subsequently lost interest and Babbage faced considerable troubles developing the necessary machine components. He abandoned the project to pursue a new one, his Analytical Engine. By 1838, he had worked out the basic design. Like a modern computer, it consisted of two basic parts: one that stores the numbers to be processed (the store), and one that performs the operations (the mill). Babbage adopted the concept of punch cards from the French engineer Joseph Jacquard, who had used it to automate the textile industry in France, to control the operations of his Analytical Engine. Unfortunately, he again lacked the financial resources to build it, and so it remained a theoretical construct. But he did leave behind detailed notes and engineering drawings, from which modern experts conclude that the technology of the time was advanced enough to actually build it. [31]
In 1840, Babbage went to Turin to give lectures on his work designing the Analytical Engine to Italian scientists. Ada Lovelace translated the notes published by one of the attendees into English, adding extensive annotations of her own. She wrote down the very first computer program, in her case one for computing the Bernoulli numbers. She employed what modern computer programmers would recognise as loops and decision steps, and gave a detailed diagram, possibly the first flowchart ever created. [31]
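The spirit of Lovelace's program, a loop that builds each Bernoulli number from the ones already computed, can be sketched in modern Python. The recurrence below is a standard modern statement, not the exact formula Lovelace tabulated for the Engine:

```python
# Bernoulli numbers via the standard recurrence
#   sum_{k=0}^{m} C(m+1, k) * B_k = 0   for m >= 1, with B_0 = 1,
# solved for B_m at each step. Exact rational arithmetic avoids
# rounding error. (A modern restatement of the computation in
# Lovelace's note, not her exact formula.)
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return [B_0, B_1, ..., B_n] as exact fractions."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        acc = sum(Fraction(comb(m + 1, k)) * B[k] for k in range(m))
        B.append(-acc / (m + 1))   # solve the recurrence for B_m
    return B

print(bernoulli(6))  # B_0 through B_6; odd entries beyond B_1 are zero
```

With this convention B₁ = −1/2, B₂ = 1/6, B₄ = −1/30, B₆ = 1/42, and every other odd-indexed value vanishes; Lovelace's own table used a different indexing of the nonzero values.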
She noted that a calculating machine could perform not just arithmetic operations but also symbolic manipulations. On the limitations and implications of the computer, she wrote, [31]
...the Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with... But it is likely to exert an indirect and reciprocal influence on science itself in another manner. For, in so distributing and combining the truths and the formulas of analysis, that they may become most easily and rapidly amenable to the mechanical combinations of the engine, the relations and the nature of many subjects in that science are necessarily thrown into new lights, and more profoundly investigated... It is however pretty evident, on general principles, that in devising for mathematical truths a new form in which to record and throw themselves out for actual use, views are likely to be induced, which should again react on the more theoretical phase of the subject.
Steam ships were one of the keys to Britain's prosperity in the nineteenth century. This technology, which predates the Victorian era, had a long and rich history. Starting in the late 1700s, people had begun building steam-powered ships with ever increasing size, operational range, and speed, first to cross the English Channel and then the Atlantic and finally to reach places as far away as India and Australia without having to refuel mid-route. International trade and travel boosted demand, and there was intense competition among the shipping companies. [32] Steam ships such as the SS Great Britain and SS Great Western made international travel more common but also advanced trade, so that in Britain it was not just the luxury goods of earlier times that were imported into the country but essentials and raw materials such as corn and cotton from the United States and meat and wool from Australia.
At 693 feet long, 120 feet wide and weighing over 18,900 tons, the SS Great Eastern was the largest ship built up to that time, capable of transporting 4,000 passengers from Britain to Australia without having to refuel along the way. Even when she was finally broken up for scrap in 1888, she was still the largest ship in the world. Her record was not broken till the Edwardian era, with super liners like the Lusitania in 1907 and the Titanic in 1912. Yet despite being a remarkable feat of engineering, the Great Eastern became more and more of a white elephant as smaller and faster ships were in greater demand. Nevertheless, she gained a new lease of life when she was chartered to lay telegraphic cables across the Atlantic, and then to India. Her size and range made her ideally suited for the task. [32]
The British government had long realised that national prosperity depended on trade. For that reason, it deployed the Royal Navy to protect maritime trade routes and financed the construction of many steam ships. [32]
Although the idea of transmitting messages via electrical signals dated back to the eighteenth century, it was not until the 1820s that advances in the study of electricity and magnetism made that a practical reality. In 1837, William Fothergill Cooke and Charles Wheatstone invented a telegraphic system that used electrical currents to deflect magnetic needles, thus transmitting coded messages. This design soon made its way all across Britain, appearing in every town and post office. By the mid-1800s, a telegraphic cable was laid across the English Channel, the Irish Sea, and the North Sea. In 1866, the SS Great Eastern successfully laid the transatlantic telegraphic cable. A global network boomed towards the end of the century. [32]
In 1876, Alexander Graham Bell patented the telephone. Like the telegraph, the telephone enabled rapid personal communication. A little over a decade later, 26,000 telephones were in service in Britain (and 150,000 in America). Multiple switchboards were installed in every major town and city. [32]
Hertz's experimental work in electromagnetism stimulated interest in the possibility of wireless communication, which did not require long and expensive cables and was faster than even the telegraph. Receiving little support in his native Italy, Guglielmo Marconi moved to England and adapted Hertz's equipment for this purpose in the 1890s. He achieved the first international wireless transmission between England and France in 1900 and by the following year, he succeeded in sending messages in Morse code across the Atlantic. Seeing its value, the shipping industry adopted this technology at once. Radio broadcasting became extremely popular in the twentieth century and remains in common use in the early twenty-first. [27] In fact, the global communications network of the twenty-first century has its roots in the Victorian era. [32]
Photography was realised in 1839 by Louis Daguerre in France and William Fox Talbot in Britain. By 1889, hand-held cameras were available. [33]
Another important innovation in communications was the Penny Black, the first postage stamp, which standardised postage at a flat price regardless of the distance a letter was sent.
A central development during the Victorian era was the rise of rail transport. The new railways allowed goods, raw materials, and people to be moved about rapidly, facilitating trade and industry. The financing of railways became an important specialty of London's financiers. [34] They retained an ownership share even while turning over management to locals; that ownership was largely liquidated in 1914–1916 to pay for the World War. Railways originated in England because industrialists had already discovered the need for inexpensive transportation to haul coal for the new steam engines, to supply parts to specialized factories, and to take products to market. The existing system of canals was inexpensive but too slow and too limited in geography. [35] The railway system led to a reorganisation of society more generally, with "railway time" becoming the standard by which clocks were set throughout Britain, and the complex railway network setting the standard for technological advances and efficiency.
The engineers and businessmen needed to create and finance a railway system were available; they knew how to invent, to build, and to finance a large complex system. The first quarter of the 19th century involved numerous experiments with locomotives and rail technology. By 1825 railways were commercially feasible, as demonstrated by George Stephenson (1781–1848) when he built the Stockton and Darlington Railway. On his first run, his locomotive pulled 38 freight and passenger cars at speeds as high as 12 miles per hour. Stephenson went on to design many more railways and is best known for standardizing designs, such as the "standard gauge" of rail spacing, at 4 feet 8½ inches. [36]
Thomas Brassey (1805–70) was even more prominent, operating construction crews that at one point in the 1840s totalled 75,000 men throughout Europe, the British Empire, and Latin America. [37] Brassey took thousands of British engineers and mechanics across the globe to build new lines. They invented and improved thousands of mechanical devices, and developed the science of civil engineering to build roadways, tunnels and bridges. [38] Britain had a superior financial system based in London that funded both the railways in Britain and also in many other parts of the world, including the United States, up until 1914. The boom years were 1836 and 1845–47 when Parliament authorised 8,000 miles of lines at a projected cost of £200 million, which was about the same value as the country's annual Gross Domestic Product (GDP) at that time. A new railway needed a charter, which typically cost over £200,000 (about $1 million) to obtain from Parliament, but opposition could effectively prevent its construction. The canal companies, unable or unwilling to upgrade their facilities to compete with railways, used political power to try to stop them. The railways responded by purchasing about a fourth of the canal system, in part to get the right of way, and in part to buy off critics. Once a charter was obtained, there was little government regulation, as laissez-faire and private ownership had become accepted practices. [39]
The different lines typically had exclusive territory, but given the compact size of Britain, this meant that multiple competing lines could provide service between major cities. George Hudson (1800–1871) became the "railway king" of Britain. He merged various independent lines and set up a "Clearing House" in 1842 which rationalized interconnections by establishing uniform paperwork and standard methods for transferring passengers and freight between lines, and rates when one system used freight cars owned by another. By 1850, rates had fallen to a penny a ton mile for coal, at speeds of up to fifty miles an hour. Britain now had the model for the world in a well integrated, well-engineered system that allowed fast, cheap movement of freight and people, and which could be replicated in other major nations.
The railways directly or indirectly employed tens of thousands of engineers, mechanics, repairmen and technicians, as well as statisticians and financial planners. They developed new, more efficient, and less expensive techniques. Most important, they created a mindset of how technology could be used in many different forms of business. Railways had a major impact on industrialization. By lowering transportation costs, they reduced costs for all industries moving supplies and finished goods, and they increased demand for the production of all the inputs needed for the railroad system itself. By 1880, there were 13,500 locomotives, each carrying 97,800 passengers a year, or 31,500 tons of freight. [40]
Member of Parliament and Solicitor to the City of London Charles Pearson campaigned for an underground rail service in London. Parts of the first such railway, the Metropolitan Line, opened to the public in 1863, thereby becoming the first subway line in the world. Trains were originally steam-powered, but in 1890, the first electric trains entered service. That same year, the whole system became officially known as the Tube after the shape of the rail tunnels. (It was not until 1908 that the name London Underground was introduced.) [41]
India provides an example of the London-based financiers pouring money and expertise into a very well built system designed for military reasons (after the Mutiny of 1857), and with the hope that it would stimulate industry. The system was overbuilt and much too elaborate and expensive for the small amount of freight traffic it carried. However, it did capture the imagination of the Indians, who saw their railways as the symbol of an industrial modernity—but one that was not realised until a century or so later. [42]
A gas network for lighting and heating was introduced in the 1880s. [43] The model town of Saltaire was founded, along with others, as a planned environment with good sanitation and many civic, educational and recreational facilities, although it lacked a pub, which was regarded as a focus of dissent. Although initially developed in the early years of the 19th century, gas lighting became widespread during the Victorian era in industry, homes, public buildings and the streets. The invention of the incandescent gas mantle in the 1890s greatly improved light output and ensured its survival as late as the 1960s. Hundreds of gasworks were constructed in cities and towns across the country. In 1882, incandescent electric lights were introduced to London streets, although it took many years before they were installed everywhere.
Medicine progressed during Queen Victoria's reign. At the start of the nineteenth century, medicine differed little from that of the medieval era, whereas by the century's end it had moved much closer to modern practice, thanks to advances in science, especially microbiology, which paved the way for the germ theory of disease. This was during the height of the Industrial Revolution, and urbanisation occurred at a frantic pace. As the population density of the cities grew, epidemics of cholera, smallpox, tuberculosis, and typhus were commonplace. [44]
After studying previous outbreaks, physician John Snow drew the conclusion that cholera was a water-borne disease. When cholera broke out in 1854, Snow mapped the locations of the cases in Soho, London, and found that they centred around a well he deemed contaminated. He asked that the pump's handle be removed, after which the epidemic petered out. Snow also discovered that households whose water supplies came from companies that drew on the Thames downstream, after many sewers had emptied into the river, were fourteen times more likely to die from cholera. He thus recommended boiling water before use. [44]
Sanitation reforms, prompted by the Public Health Acts 1848 and 1869, were made in the crowded, dirty streets of the existing cities, and soap was the main product shown in the relatively new phenomenon of advertising. A great engineering feat in the Victorian Era was the sewage system in London. It was designed by Joseph Bazalgette in 1858. He proposed to build 82 mi (132 km) of sewer system linked with over 1,000 mi (1,600 km) of street sewers. Many problems were encountered but the sewers were completed. After this, Bazalgette designed the Thames Embankment which housed sewers, water pipes and the London Underground. During the same period, London's water supply network was expanded and improved. [43]
John Simon, as chief medical officer of the General Board of Health, secured funds for research into various common infectious diseases at the time, including cholera, diphtheria, smallpox, and typhus. Using his political influence, he garnered support for the Public Health Act of 1875, which focused on preventative measures in housing, the water supply, sewage and drainage, providing Britain with an extensive public health system. [44]
By mid-century, the stethoscope became an oft-used device and designs of the microscope had advanced enough for scientists to closely examine pathogens. The pioneering work of French microbiologist Louis Pasteur from the 1850s earned widespread acceptance for the germ theory of disease. [44] It led to the introduction of antiseptics by Joseph Lister in 1867 in the form of carbolic acid (phenol). [45] He instructed the hospital staff to wear gloves and wash their hands, instruments, and dressings with a phenol solution and in 1869, he invented a machine that would spray carbolic acid in the operating theatre during surgery. [45] Infection-related deaths fell noticeably as a result. [44]
As the British Empire expanded, Britons found themselves facing novel climates and contagions; there was active research into tropical diseases. In 1898, Ronald Ross proved that the mosquito was responsible for spreading malaria. [44]
Although nitrous oxide, or laughing gas, had been proposed as an anaesthetic as far back as 1799 by Humphry Davy, it was not until 1846, when an American dentist named William Morton started using ether on his patients, that anaesthetics became common in the medical profession. [46] In 1847 chloroform was introduced as an anaesthetic by James Young Simpson. [47] Chloroform was favoured by doctors and hospital staff because it is much less flammable than ether, but critics complained that it could cause the patient to have a heart attack. [47] Chloroform gained in popularity in England and Germany after John Snow gave Queen Victoria chloroform for the birth of her eighth child (Prince Leopold). [48] By 1920, chloroform was used in 80 to 95% of all narcoses performed in the UK and German-speaking countries. [47] A combination of antiseptics and anaesthetics helped surgeons operate more carefully and comfortably on their patients. [44]
Anaesthetics made painless dentistry possible. At the same time sugar consumption in the British diet increased, greatly increasing instances of tooth decay. [49] As a result, more and more people were having teeth extracted and needing dentures. This gave rise to "Waterloo Teeth", which were real human teeth set into hand-carved pieces of ivory from hippopotamus or walrus jaws. [49] [50] The teeth were obtained from executed criminals, victims of battlefields, from grave-robbers, and were even bought directly from the desperately impoverished. [49]
The increase in tooth decay also brought the first prominent recommendation for fluoride as a nutrient, particularly in pregnancy and childhood, in 1892. [51]
News of the discovery of X-rays in 1895 spread like wildfire. Their medical value was realised immediately, and within a year, doctors were prescribing X-rays for diagnosis, in particular to locate bone fractures and foreign objects inside the patient's body. Radioactivity was discovered in 1896, and was later used to treat cancer. [44]
During the second half of the nineteenth century, British medical doctors became increasingly specialised, following in the footsteps of their German counterparts, and more hospitals were built. Surgeons began wearing gowns in the operating room, and doctors donned white coats and stethoscopes, sights that remain common in the early twenty-first century. [44]
Yet despite all the aforementioned medical advances, the mortality rate fell only marginally, from 20.8 per thousand in 1850 to 18.2 by the end of the century. Urbanisation aided the spread of diseases and squalid living conditions in many places exacerbated the problem. Moreover, while some diseases, such as cholera, were being driven out, others, such as sexually transmitted diseases, made themselves felt. [44]
In physics, electromagnetism is an interaction that occurs between particles with electric charge via electromagnetic fields. The electromagnetic force is one of the four fundamental forces of nature. It is the dominant force in the interactions of atoms and molecules. Electromagnetism can be thought of as a combination of electrostatics and magnetism, which are distinct but closely intertwined phenomena. Electromagnetic forces occur between any two charged particles. Electric forces cause an attraction between particles with opposite charges and repulsion between particles with the same charge, while magnetism is an interaction that occurs between charged particles in relative motion. These two forces are described in terms of electromagnetic fields. Macroscopic charged objects are described in terms of Coulomb's law for electricity and Ampère's force law for magnetism; the Lorentz force law describes the force on microscopic charged particles.
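For reference, the two force laws named above can be stated in modern vector notation (which postdates much of the Victorian work described here); here $\varepsilon_0$ denotes the permittivity of free space:

```latex
% Coulomb's law: force between point charges q_1 and q_2 a distance r apart
\mathbf{F} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2}\,\hat{\mathbf{r}}

% Lorentz force on a charge q moving with velocity v through fields E and B
\mathbf{F} = q\,(\mathbf{E} + \mathbf{v} \times \mathbf{B})
```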
Physics is a branch of science whose primary objects of study are matter and energy. Discoveries of physics find applications throughout the natural sciences and in technology. Historically, physics emerged from the scientific revolution of the 17th century, grew rapidly in the 19th century, then was transformed by a series of discoveries in the 20th century. Physics today may be divided loosely into classical physics and modern physics.
Maxwell's equations, or Maxwell–Heaviside equations, are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, and electric and magnetic circuits. The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. They describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. The equations are named after the physicist and mathematician James Clerk Maxwell, who, in 1861 and 1862, published an early form of the equations that included the Lorentz force law. Maxwell first used the equations to propose that light is an electromagnetic phenomenon. The modern form of the equations in their most common formulation is credited to Oliver Heaviside.
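The modern form credited to Heaviside is commonly written, in SI units, as the following four vector equations:

```latex
% Maxwell's equations in SI units (Heaviside's vector form)
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}
  && \text{(Gauss's law)}\\
\nabla \cdot \mathbf{B} &= 0
  && \text{(Gauss's law for magnetism)}\\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}
  && \text{(Faraday's law of induction)}\\
\nabla \times \mathbf{B} &= \mu_0\left(\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}\right)
  && \text{(Ampère–Maxwell law)}
\end{aligned}
```

Here $\rho$ is the charge density, $\mathbf{J}$ the current density, and $\mu_0$ and $\varepsilon_0$ the permeability and permittivity of free space.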
Oliver Heaviside FRS was an English self-taught mathematician and physicist who invented a new technique for solving differential equations, independently developed vector calculus, and rewrote Maxwell's equations in the form commonly used today. He significantly shaped the way Maxwell's equations were understood and applied in the decades following Maxwell's death. His formulation of the telegrapher's equations became commercially important during his own lifetime, although their significance went unrecognised for a long while, as few others were versed at the time in his novel methodology. Although at odds with the scientific establishment for most of his life, Heaviside changed the face of telecommunications, mathematics, and science.
William Thomson, 1st Baron Kelvin was a British mathematician, mathematical physicist and engineer. Born in Belfast, he was the professor of Natural Philosophy at the University of Glasgow for 53 years, where he undertook significant research and mathematical analysis of electricity, was instrumental in the formulation of the first and second laws of thermodynamics, and contributed significantly to unifying physics, which was then in its infancy of development as an emerging academic discipline. He received the Royal Society's Copley Medal in 1883 and served as its president from 1890 to 1895. In 1892, he became the first scientist to be elevated to the House of Lords.
Baron Siméon Denis Poisson FRS FRSE was a French mathematician and physicist who worked on statistics, complex analysis, partial differential equations, the calculus of variations, analytical mechanics, electricity and magnetism, thermodynamics, elasticity, and fluid mechanics. Moreover, he predicted the Arago spot in his attempt to disprove the wave theory of Augustin-Jean Fresnel.
Mathematical physics refers to the development of mathematical methods for application to problems in physics. The Journal of Mathematical Physics defines the field as "the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories". An alternative definition would also include those mathematics that are inspired by physics, known as physical mathematics.
In physics, action at a distance is the concept that an object's motion can be affected by another object without being in physical contact with it; that is, the non-local interaction of objects that are separated in space. Coulomb's law and Newton's law of universal gravitation are based on action at a distance.
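Both examples share the same instantaneous inverse-square form, which is what made action at a distance seem natural before field theories; for gravitation it reads:

```latex
% Newton's law of universal gravitation between masses m_1 and m_2 a distance r apart
\mathbf{F} = G\,\frac{m_1 m_2}{r^2}\,\hat{\mathbf{r}}
% Coulomb's law for two point charges has the same 1/r^2 dependence,
% with G m_1 m_2 replaced by q_1 q_2 / (4 \pi \varepsilon_0)
```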
Experimental physics is the category of disciplines and sub-disciplines in the field of physics that are concerned with the observation of physical phenomena and experiments. Methods vary from discipline to discipline, from simple experiments and observations, such as Galileo's experiments, to more complicated ones, such as the Large Hadron Collider.
A Treatise on Electricity and Magnetism is a two-volume treatise on electromagnetism written by James Clerk Maxwell in 1873. Maxwell was revising the Treatise for a second edition when he died in 1879. The revision was completed by William Davidson Niven for publication in 1881. A third edition was prepared by J. J. Thomson for publication in 1892.
"A Dynamical Theory of the Electromagnetic Field" is a paper by James Clerk Maxwell on electromagnetism, published in 1865. In the paper, Maxwell derives an electromagnetic wave equation with a velocity for light in close agreement with measurements made by experiment, and deduces that light is an electromagnetic wave.
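In modern notation, the free-space wave equation Maxwell derived, and the propagation speed it predicts, can be written as:

```latex
% Electromagnetic wave equation in free space, and the predicted wave speed
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0\,\frac{\partial^2 \mathbf{E}}{\partial t^2},
\qquad
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3.0 \times 10^{8}\ \mathrm{m/s}
```

The close agreement of this value with the measured speed of light was the basis for Maxwell's deduction that light is an electromagnetic wave.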
Faraday's law of induction is a law of electromagnetism predicting how a magnetic field will interact with an electric circuit to produce an electromotive force (emf). This phenomenon, known as electromagnetic induction, is the fundamental operating principle of transformers, inductors, and many types of electric motors, generators and solenoids.
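In modern notation, the law is commonly stated as:

```latex
% Faraday's law: the induced emf equals the negative rate of change
% of the magnetic flux through the circuit
\mathcal{E} = -\frac{d\Phi_B}{dt}
```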
Relativistic electromagnetism is a physical phenomenon explained in electromagnetic field theory due to Coulomb's law and Lorentz transformations.
A treatise is a formal and systematic written discourse on some subject concerned with investigating or exposing the principles of the subject and its conclusions. A monograph is a treatise on a specialized topic.
James Clerk Maxwell was a Scottish physicist with broad interests who was responsible for the classical theory of electromagnetic radiation, which was the first theory to describe electricity, magnetism and light as different manifestations of the same phenomenon. Maxwell's equations for electromagnetism have been called the "second great unification in physics" where the first one had been realised by Isaac Newton.
"On Physical Lines of Force" is a four-part paper written by James Clerk Maxwell, published in 1861. In it, Maxwell derived the equations of electromagnetism in conjunction with a "sea" of "molecular vortices" which he used to model Faraday's lines of force. Maxwell had studied and commented on the field of electricity and magnetism as early as 1855/56 when "On Faraday's Lines of Force" was read to the Cambridge Philosophical Society. Maxwell made an analogy between the density of this medium and the magnetic permeability, as well as an analogy between the transverse elasticity and the dielectric constant, and using the results of a prior experiment by Wilhelm Eduard Weber and Rudolf Kohlrausch performed in 1856, he established a connection between the speed of light and the speed of propagation of waves in this medium.
By the first half of the 19th century, the understanding of electromagnetics had improved through many experiments and theoretical work. In the 1780s, Charles-Augustin de Coulomb established his law of electrostatics. In 1825, André-Marie Ampère published his force law. In 1831, Michael Faraday discovered electromagnetic induction through his experiments, and proposed lines of force to describe it. In 1834, Emil Lenz solved the problem of the direction of the induced current, and Franz Ernst Neumann wrote down the equation to calculate the induced force by change of magnetic flux. However, these experimental results and rules were not well organized and were sometimes confusing to scientists. A comprehensive summary of the electrodynamic principles was needed.
In the history of physics, the concept of fields had its origins in the 18th century in a mathematical formulation of Newton's law of universal gravitation, but it was seen as deficient as it implied action at a distance. In 1852, Michael Faraday treated the magnetic field as a physical object, reasoning about lines of force. James Clerk Maxwell used Faraday's conceptualisation to help formulate his unification of electricity and magnetism in his field theory of electromagnetism.
The 19th century in science saw the birth of science as a profession; the term scientist was coined in 1833 by William Whewell, which soon replaced the older term of (natural) philosopher.
A History of the Theories of Aether and Electricity is any of three books written by British mathematician Sir Edmund Taylor Whittaker FRS FRSE on the history of electromagnetic theory, covering the development of classical electromagnetism, optics, and aether theories. The book's first edition, subtitled from the Age of Descartes to the Close of the Nineteenth Century, was published in 1910 by Longmans, Green, and covers the history of aether theories and the development of electromagnetic theory up to the 20th century. A second, extended and revised, edition consisting of two volumes was released in the early 1950s by Thomas Nelson, expanding the book's scope to include the first quarter of the 20th century. The first volume, subtitled The Classical Theories, was published in 1951 and served as a revised and updated edition of the first book. The second volume, subtitled The Modern Theories (1900–1926), was published two years later in 1953 and extended the work to cover the years 1900 to 1926. Notwithstanding a notorious controversy over Whittaker's views on the history of special relativity, covered in volume two of the second edition, the books are considered authoritative references on the history of electricity and magnetism as well as classics in the history of physics.