The history of military technology, including the military funding of science, has had a powerful transformative effect on the practice and products of scientific research since the early 20th century. Particularly since World War I, advanced science-based technologies have been viewed as essential elements of a successful military.
World War I is often called "the chemists' war", both for the extensive use of poison gas and the importance of nitrates and advanced high explosives. Poison gas, beginning in 1915 with chlorine from the powerful German dye industry, was used extensively by the Germans and the British; over the course of the war, scientists on both sides raced to develop more and more potent chemicals and devise countermeasures against the newest enemy gases. [1] Physicists also contributed to the war effort, developing wireless communication technologies and sound-based methods of detecting U-boats, resulting in the first tenuous long-term connections between academic science and the military. [2]
World War II marked a massive increase in the military funding of science, particularly physics. In addition to the Manhattan Project and the resulting atomic bomb, British and American work on radar was widespread and ultimately highly influential in the course of the war; radar enabled detection of enemy ships and aircraft, as well as the radar-based proximity fuze. Mathematical cryptography, meteorology, and rocket science were also central to the war effort, with military-funded wartime advances having a significant long-term effect on each discipline. The technologies employed at the end of the war—jet aircraft, radar and proximity fuzes, and the atomic bomb—were radically different from pre-war technology; military leaders came to view continued advances in technology as the critical element for success in future wars. The advent of the Cold War solidified the links between military institutions and academic science, particularly in the United States and the Soviet Union, so that even during a period of nominal peace military funding continued to expand. Funding spread to the social sciences as well as the natural sciences, and emerging fields such as digital computing were born of military patronage. Following the end of the Cold War and the dissolution of the Soviet Union, military funding of science has decreased substantially, but much of the American military-scientific complex remains in place.
The sheer scale of military funding for science since World War II has instigated a large body of historical literature analyzing the effects of that funding, especially for American science. Since Paul Forman's 1987 article "Behind quantum electronics: National security as a basis for physical research in the United States, 1940-1960," there has been an ongoing historical debate over precisely how and to what extent military funding affected the course of scientific research and discovery. [3] Forman and others have argued that military funding fundamentally redirected science—particularly physics—toward applied research, and that military technologies predominantly formed the basis for subsequent research even in areas of basic science; ultimately the very culture and ideals of science were colored by extensive collaboration between scientists and military planners. Daniel Kevles has presented an alternative view: while military funding provided many new opportunities for scientists and dramatically expanded the scope of physical research, scientists by and large retained their intellectual autonomy.
While there were numerous instances of military support for scientific work before the 20th century, these were typically isolated cases; knowledge gained from technology was generally far more important to the development of science than scientific knowledge was to technological innovation. [4] Thermodynamics, for example, is a science partly born from military technology: one of the many sources of the first law of thermodynamics was Count Rumford's observation of the heat produced by boring cannon barrels. [5] Mathematics was important in the development of the Greek catapult and other weapons, [6] but the analysis of ballistics was equally important for the development of mathematics; Galileo, for his part, promoted the telescope as a military instrument to the military-minded Republic of Venice before turning it to the skies while seeking the patronage of the Medici court in Florence. [7] In general, craft-based innovation, disconnected from the formal systems of science, was the key to military technology well into the 19th century.
Even craft-based military technologies were not generally produced by military funding. Instead, craftsmen and inventors developed weapons and military tools independently and actively sought the interest of military patrons afterward. [8] Following the rise of engineering as a profession in the 18th century, governments and military leaders did try to harness the methods of both science and engineering for more specific ends, but frequently without success. In the decades leading up to the French Revolution, French artillery officers were often trained as engineers, and military leaders from this mathematical tradition attempted to transform the process of weapons manufacture from a craft-based enterprise to an organized and standardized system based on engineering principles and interchangeable parts (pre-dating the work of Eli Whitney in the U.S.). During the Revolution, even natural scientists participated directly, attempting to create “weapons more powerful than any we possess” to aid the cause of the new French Republic, though there were no means for the revolutionary army to fund such work. [9] Each of these efforts, however, was ultimately unsuccessful in producing militarily useful results. A slightly different outcome came from the longitude prize of the 18th century, offered by the British government for an accurate method of determining a ship's longitude at sea (essential for the safe navigation of the powerful British navy): intended to promote—and financially reward—a scientific solution, it was instead won by a scientific outsider, the clockmaker John Harrison. [10] However, the naval utility of astronomy did help increase the number of capable astronomers and focus research on developing more powerful and versatile instruments.
Through the 19th century, science and technology grew closer together, particularly through electrical and acoustic inventions and the corresponding mathematical theories. The late 19th and early 20th centuries witnessed a trend toward military mechanization, with the advent of repeating rifles with smokeless powder, long-range artillery, high explosives, machine guns, and mechanized transport along with telegraphic and later wireless battlefield communication. Still, independent inventors, scientists and engineers were largely responsible for these drastic changes in military technology (with the exception of the development of battleships, which could only have been created through organized large-scale effort). [11]
World War I marked the first large-scale mobilization of science for military purposes. Prior to the war, the American military ran a few small laboratories as well as the Bureau of Standards, but independent inventors and industrial firms predominated. [12] Similarly in Europe, military-directed scientific research and development was minimal. The powerful new technologies that led to trench warfare, however, reversed the traditional advantage of fast-moving offensive tactics; fortified positions supported by machine guns and artillery resulted in high attrition but strategic stalemate. Militaries turned to scientists and engineers for even newer technologies, but the introduction of tanks and aircraft had only a marginal effect; the use of poison gas made a tremendous psychological impact but decisively favored neither side. The war ultimately turned on maintaining adequate supplies of materials, a problem also addressed by military-funded science—and, through the international chemical industry, closely related to the advent of chemical warfare.
The Germans introduced gas as a weapon in part because naval blockades limited their supply of nitrate for explosives, while the massive German dye industry could easily produce chlorine and organic chemicals in large amounts. Industrial capacity was completely mobilized for war, and Fritz Haber and other industrial scientists were eager to contribute to the German cause; soon they were closely integrated into the military hierarchy as they tested the most effective ways of producing and delivering weaponized chemicals. Though the initial impetus for gas warfare came from outside the military, further developments in chemical weapon technology might be considered military-funded, considering the blurring of lines between industry and nation in Germany. [13]
Following the first chlorine attack by the Germans in April 1915, the British quickly moved to recruit scientists for developing their own gas weapons. Gas research escalated on both sides, with chlorine followed by phosgene, a variety of tear gases, and mustard gas. A wide array of research was conducted on the physiological effects of other gases, such as hydrogen cyanide, arsenic compounds, and a host of complex organic chemicals. The British built from scratch what became an expansive research facility at Porton Down, which remains a significant military research institution into the 21st century. Unlike many earlier military-funded scientific ventures, the research at Porton Down did not stop when the war ended or an immediate goal was achieved. In fact, every effort was made to create an attractive research environment for top scientists, and chemical weapons development continued apace—though in secret—through the interwar years and into World War II. German military-backed gas warfare research did not resume until the Nazi era, following the 1936 discovery of tabun, the first nerve agent, through industrial insecticide research.
In the United States, the established tradition of engineering was explicitly competing with the rising discipline of physics for World War I military largess. A host of inventors, led by Thomas Edison and his newly created Naval Consulting Board, cranked out thousands of inventions to solve military problems and aid the war effort, while academic scientists worked through the National Research Council (NRC) led by Robert Millikan. Submarine detection was the most important problem that both the physicists and inventors hoped to solve, as German U-boats were decimating the crucial naval supply lines from the U.S. to England. Edison's Board produced very few useful innovations, but NRC research resulted in moderately successful sound-based methods for locating submarines and hidden ground-based artillery, as well as useful navigational and photographic equipment for aircraft. Because of the success of academic science in solving specific military problems, the NRC was retained after the war's end, though it gradually decoupled from the military. [14]
Many industrial and academic chemists and physicists came under military control during the Great War, but post-war research by the Royal Engineers Experimental Station at Porton Down and the continued operation of the National Research Council were exceptions to the overall pattern; wartime chemistry funding was a temporary redirection of a field largely driven by industry and later medicine, while physics grew closer to industry than to the military. The discipline of modern meteorology, however, was largely built from military funding. During World War I, the French civilian meteorological infrastructure was largely absorbed into the military. The introduction of military aircraft during the war as well as the role of wind and weather in the success or failure of gas attacks meant meteorological advice was in high demand. The French army (among others) created its own supplementary meteorological service as well, retraining scientists from other fields to staff it. At war's end, the military continued to control French meteorology, sending weathermen to French colonial interests and integrating weather service with the growing air corps; most of the early-twentieth century growth in European meteorology was the direct result of military funding. [15] World War II would result in a similar transformation of American meteorology, initiating a transition from an apprenticeship system for training weathermen (based on intimate knowledge of local trends and geography) to the university-based, science-intensive system that has predominated since.
If World War I was the chemists' war, World War II was the physicists' war. As with other total wars, it is difficult to draw a line between military funding and more spontaneous military-scientific collaboration during World War II. Well before the Invasion of Poland, nationalism was a powerful force in the German physics community (see Deutsche Physik); the military mobilization of physicists was all but irresistible after the rise of National Socialism. German and Allied investigations of the possibility of a nuclear bomb began in 1939 at the initiative of civilian scientists, but by 1942 the respective militaries were heavily involved. The German nuclear energy project had two independent teams, a civilian-controlled team under Werner Heisenberg and a military-controlled team led by Kurt Diebner; the latter was more explicitly aimed at producing a bomb (as opposed to a power reactor) and received much more funding from the Nazis, though neither was ultimately successful. [16]
In the U.S., the Manhattan Project and other projects of the Office of Scientific Research and Development resulted in a much more extensive military-scientific venture, the scale of which dwarfed previous military-funded research projects. Theoretical work by a number of British and American scientists resulted in significant optimism about the possibility of a nuclear chain reaction. As the physicists convinced military leaders of the potential of nuclear weapons, funding for actual development was ratcheted up rapidly. A number of large laboratories were created across the United States for work on different aspects of the bomb, while many existing facilities were reoriented to bomb-related work; some were university-managed while others were government-run, but all were ultimately funded and directed by the military. [17] The May 1945 surrender of Germany, the original intended target for the bomb, did virtually nothing to slow the project's momentum. After Japan's surrender immediately following the atomic bombings of Hiroshima and Nagasaki, many scientists returned to academia or industry, but the Manhattan Project infrastructure was too large—and too effective—to be dismantled wholesale; it became the model for future military-scientific work, in the U.S. and elsewhere. [18]
Other wartime physics research, particularly in rocketry and radar technology, was less significant in popular culture but much more significant for the outcome of the war. German rocketry was driven by the pursuit of Wunderwaffen, resulting in the V-2 ballistic missile; the technology as well as the personal expertise of the German rocketry community was absorbed by the U.S. and U.S.S.R. rocket programs after the war, forming the basis of long-term military-funded rocketry, ballistic missile, and later space research. Rocket science was only beginning to make an impact in the final years of the war. German rockets created fear and destruction in London but had only modest military significance, while air-to-ground rockets enhanced the power of American air strikes; jet aircraft also went into service by the end of the war. [19] Radar work before and during the war provided even more of an advantage for the Allies. British physicists pioneered long-wave radar, developing an effective system for detecting incoming German air forces. Work on potentially more precise short-wave radar was turned over to the U.S.; several thousand academic physicists and engineers not participating in the Manhattan Project did radar work, particularly at MIT and Stanford, resulting in microwave radar systems that could resolve more detail in incoming flight formations. Further refinement of microwave technology led to proximity fuzes, which greatly enhanced the ability of the U.S. Navy to defend against Japanese bombers. Microwave production, detection and manipulation also formed the technical foundation, complementing the institutional foundation of the Manhattan Project, for much post-war defense research.
In the years immediately following World War II, the military was by far the most significant patron of university science research in the U.S., and the national labs also continued to flourish. [20] After two years in political limbo (but with work on nuclear power and bomb manufacture continuing apace), the Manhattan Project became a permanent arm of the government as the Atomic Energy Commission. The Navy—inspired by the success of military-directed wartime research—created its own R&D organization, the Office of Naval Research, which would preside over an expanded long-term research program at the Naval Research Laboratory as well as fund a variety of university-based research. Military money following up on the wartime radar research led to explosive growth in both electronics research and electronics manufacturing. [21] The Air Force became a service branch independent of the Army and established its own research and development system, and the Army followed suit (though it was less invested in academic science than the Navy or Air Force). Meanwhile, the perceived communist menace of the Soviet Union caused tensions—and military budgets—to escalate rapidly.
The Department of Defense primarily funded what has been broadly described as “physical research,” but to reduce this to merely chemistry and physics is misleading. Military patronage benefited a large number of fields, and in fact helped create a number of the modern scientific disciplines. At Stanford and MIT, for example, electronics, aerospace engineering, nuclear physics, and materials science—all physics, broadly speaking—each developed in different directions, becoming increasingly independent of parent disciplines as they grew and pursued defense-related research agendas. What began as interdepartmental laboratories became the centers for graduate teaching and research innovation thanks to the broad scope of defense funding. The need to keep up with corporate technology research (which was receiving the lion's share of defense contracts) also prompted many science labs to establish close relationships with industry. [22]
The complex histories of computer science and computer engineering were shaped, in the first decades of digital computing, almost entirely by military funding. Most of the basic component technologies for digital computing were developed in the course of the long-running Whirlwind-SAGE program to build an automated radar shield. Virtually unlimited funds enabled two decades of research that only began producing useful technologies by the end of the 1950s; even the final version of the SAGE command and control system had only marginal military utility. More so than with previously established disciplines receiving military funding, the culture of computer science was permeated with a Cold War military perspective. Indirectly, the ideas of computer science also had a profound effect on psychology, cognitive science and neuroscience through the mind-computer analogy. [23]
The history of earth science and the history of astrophysics were also closely tied to military purposes and funding throughout the Cold War. American geodesy, oceanography, and seismology grew from small sub-disciplines into full-fledged independent disciplines, as for several decades virtually all funding in these fields came from the Department of Defense. A central goal that tied these disciplines together (even while providing the means for intellectual independence) was the figure of the Earth, the model of the earth's geography and gravitation that was essential for accurate ballistic missiles. In the 1960s, geodesy was the ostensible goal of the CORONA satellite program, while military reconnaissance was in fact the driving force. Even for geodetic data, new secrecy guidelines worked to restrict collaboration in a field that had formerly been fundamentally international; the figure of the Earth had geopolitical significance beyond questions of pure geoscience. Still, geodesists were able to retain enough autonomy, and to subvert secrecy limitations enough, to use the findings of their military research to overturn some of the fundamental theories of geodesy. [24] Like geodesy and satellite photography research, the advent of radio astronomy had a military purpose hidden beneath an official astrophysical research agenda. Quantum electronics permitted both revolutionary new methods of analyzing the universe and—using the same equipment and technology—the monitoring of Soviet electronic signals. [25]
Military interest in (and funding of) seismology, meteorology and oceanography was in some ways a result of the defense-related payoffs of physics and geodesy. The immediate goal of funding in these fields was to detect clandestine nuclear testing and track fallout radiation, a necessary precondition for treaties to limit the nuclear weapon technology that earlier military research had created. In particular, the feasibility of monitoring underground nuclear explosions was crucial to the prospects for a comprehensive test ban, as opposed to the more limited Partial Nuclear Test Ban Treaty. [26] But the military-funded growth of these disciplines continued even when no pressing military goals were driving them; as with other natural sciences, the military also found value in having ‘scientists on tap’ for unforeseen future R&D needs. [27]
The biological sciences were also affected by military funding, though largely indirectly, with the exception of medical and genetic research related to nuclear physics. The most significant funding sources for basic research before the rise of the military-industrial-academic complex were philanthropic organizations such as the Rockefeller Foundation. After World War II (and to some extent before), the influx of new industrial and military funding opportunities for the physical sciences prompted philanthropies to divest from physics research—most early work in high-energy physics and biophysics had been the product of foundation grants—and refocus on biological and medical research.
The social sciences also found limited military support from the 1940s to the 1960s, but much defense-minded social science research could be—and was—pursued without extensive military funding. In the 1950s, social scientists tried to emulate the interdisciplinary organizational success of the physical sciences' Manhattan Project with the synthetic behavioral science movement. [28] Social scientists actively sought to promote their usefulness to the military, researching topics related to propaganda (put to use in Korea), decision making, the psychological and sociological causes and effects of communism, and a broad constellation of other topics of Cold War significance. By the 1960s, economists and political scientists offered up modernization theory for the cause of Cold War nation-building; modernization theory found a home in the military in the form of Project Camelot, a study of the process of revolution, as well as in the Kennedy administration's approach to the Vietnam War. Project Camelot was ultimately canceled because of the concerns it raised about scientific objectivity in the context of such a politicized research agenda; though the natural sciences were not yet susceptible to charges of corruption by military and political influence, the social sciences were. [29]
Historian Paul Forman, in his seminal 1987 article, proposed that not only had military funding of science greatly expanded the scope and significance of American physics, it also initiated "a qualitative change in its purposes and character." [30] Historians of science were beginning to turn to the Cold War relationship between science and the military for detailed study, and Forman's “distortionist critique” (as Roger Geiger has described it) served to focus the ensuing debates. [31]
Forman and others (e.g., Robert Seidel, Stuart Leslie, and, for the history of the social sciences, Ron Robin) view the influx of military money and the focus on applied rather than basic research as having had, at least in part, a negative impact on the course of subsequent research. In turn, critics of the distortionist thesis, beginning with Daniel Kevles, deny that the military "seduced American physicists from, so to speak, a 'true basic physics'." [32] Kevles, as well as Geiger, instead weigh the effects of military funding against the alternative of no funding at all, rather than against the same funding put to alternate scientific use. [33]
Most recent scholarship has moved toward a tempered version of Forman's thesis, in which scientists retained significant autonomy despite the radical changes brought about by military funding. [34]
Vannevar Bush was an American engineer, inventor and science administrator, who during World War II headed the U.S. Office of Scientific Research and Development (OSRD), through which almost all wartime military R&D was carried out, including important developments in radar and the initiation and early administration of the Manhattan Project. He emphasized the importance of scientific research to national security and economic well-being, and was chiefly responsible for the movement that led to the creation of the National Science Foundation.
Edward Teller was a Hungarian-American theoretical physicist and chemical engineer, known colloquially as "the father of the hydrogen bomb" and one of the creators, with Stanisław Ulam, of the Teller–Ulam design.
Science and technology in the United States has a long history, producing many important figures and developments in the field. The United States of America came into being around the Age of Enlightenment, an era of Western philosophy centered on the 18th century in which writers and thinkers, rejecting the perceived superstitions of the past, emphasized intellectual, scientific and cultural life and advocated reason as the primary source of legitimacy and authority. Enlightenment philosophers envisioned a "republic of science," where ideas would be exchanged freely and useful knowledge would improve the lot of all citizens.
Building on major scientific breakthroughs made during the 1930s, the United Kingdom began the world's first nuclear weapons research project, codenamed Tube Alloys, in 1941, during World War II. The United States, in collaboration with the United Kingdom, initiated the Manhattan Project the following year to build a weapon using nuclear fission. The project also involved Canada. In August 1945, the atomic bombings of Hiroshima and Nagasaki were conducted by the United States, with British consent, against Japan at the close of that war, standing to date as the only use of nuclear weapons in hostilities.
Technology played a significant role in World War II. Some of the technologies used during the war were developed during the interwar years of the 1920s and 1930s, many were developed in response to needs and lessons learned during the war, and others were only beginning to be developed as the war ended. Many wars have had major effects on the technologies we use in our daily lives, but World War II had the greatest effect on the technology and devices used today. Technology also played a greater role in the conduct of World War II than in any other war in history, and had a critical role in its outcome.
Patrick Maynard Stuart Blackett, Baron Blackett, was a British experimental physicist known for his work on cloud chambers, cosmic rays, and paleomagnetism, who was awarded the Nobel Prize in Physics in 1948. In 1925 he became the first person to prove that radioactivity could cause the nuclear transmutation of one chemical element into another. He also made a major contribution in World War II, advising on military strategy and developing operational research. His views found an outlet in Third World development and in influencing policy in the Labour government of the 1960s.
Abdul Qadeer Khan, known as A. Q. Khan, was a Pakistani nuclear physicist and metallurgical engineer colloquially called the "father of Pakistan's atomic weapons program".
Big science is a term used by scientists and historians of science to describe a series of changes in science which occurred in industrial nations during and after World War II, as scientific progress increasingly came to rely on large-scale projects usually funded by national governments or groups of governments. Individual or small-group efforts, or small science, remain relevant, as theoretical results by individual authors may have a significant impact, but very often the empirical verification requires experiments using constructions such as the Large Hadron Collider, costing between $5 billion and $10 billion.
Raja Ramanna was an Indian physicist. He was the director of India's nuclear program in the late 1960s and early 1970s, which culminated in Smiling Buddha, India's first successful nuclear weapon test on 18 May 1974.
The Soviet atomic bomb project was the classified research and development program that was authorized by Joseph Stalin in the Soviet Union to develop nuclear weapons during and after World War II.
During World War II, Japan had several programs exploring the use of nuclear fission for military technology, including nuclear reactors and nuclear weapons. Like the similar wartime program in Nazi Germany, these efforts were relatively small, suffered from an array of problems brought on by lack of resources and wartime disarray, and were ultimately unable to progress beyond the laboratory stage during the war.
Carl Friedrich Freiherr von Weizsäcker was a German physicist and philosopher. He was the longest-living member of the team which performed nuclear research in Nazi Germany during the Second World War, under Werner Heisenberg's leadership. There is ongoing debate as to whether or not he and the other members of the team actively and willingly pursued the development of a nuclear bomb for Germany during this time.
Nazi Germany undertook several research programs relating to nuclear technology, including nuclear weapons and nuclear reactors, before and during World War II. These were variously called Uranverein or Uranprojekt. The first effort started in April 1939, just months after the discovery of nuclear fission in Berlin in December 1938, but ended only a few months later, shortly ahead of the September 1939 German invasion of Poland, for which many notable German physicists were drafted into the Wehrmacht. A second effort under the administrative purview of the Wehrmacht's Heereswaffenamt began on September 1, 1939, the day of the invasion of Poland. The program eventually expanded into three main efforts: Uranmaschine development, uranium and heavy water production, and uranium isotope separation. Eventually, the German military determined that nuclear fission would not contribute significantly to the war, and in January 1942 the Heereswaffenamt turned the program over to the Reich Research Council while continuing to fund the activity.
The National Defense Research Committee (NDRC) was an organization created "to coordinate, supervise, and conduct scientific research on the problems underlying the development, production, and use of mechanisms and devices of warfare" in the United States from June 27, 1940, until June 28, 1941. Most of its work was done with the strictest secrecy, and it began research on what would become some of the most important technologies of World War II, including radar and the atomic bomb. It was superseded by the Office of Scientific Research and Development in 1941 and reduced to a merely advisory role until it was eventually terminated in 1947.
The Dr. A. Q. Khan Research Laboratories is a federally funded research and development laboratory located in Kahuta, a short distance from Rawalpindi in Punjab, Pakistan. Established in 1976, the laboratory is best known for its central role in Pakistan's nuclear weapons program and its contributions to nuclear science.
Erich Schumann was a German physicist who specialized in acoustics and explosives, and had a penchant for music. He was a general officer in the army and a professor at the University of Berlin and the Technische Hochschule Berlin. When Adolf Hitler came to power he joined the Nazi Party. During World War II, his positions in the Army Ordnance Office and the Army High Command made him one of the most powerful and influential physicists in Germany. He ran the German nuclear energy program from 1939 to 1942, when the army relinquished control to the Reich Research Council. His role in the project was obfuscated after the war by the German physics community's defense of its conduct during the war. The publication of his book on the military's role in the project was not allowed by the British occupation authorities. He was director of the Helmholtz Institute of Sound Psychology and Medical Acoustics.
Paul Forman is a historian of science and is the retired curator of the Division of Medicine and Science at the National Museum of American History. Forman's primary research focus has been the history of physics, in which he has helped pioneer the application of cultural history to scientific developments.
Munir Ahmad Khan, NI, HI, FPAS, was a Pakistani nuclear reactor physicist who is credited, among others, as the "father of the atomic bomb program" of Pakistan for his leading role in developing the nation's nuclear weapons in the years following the 1971 war with India.
Project-706, also known as Project-786, was the codename of a research and development program to develop Pakistan's first nuclear weapons. The program was initiated by Prime Minister Zulfiqar Ali Bhutto in 1974 in response to the Indian nuclear tests conducted in May 1974. During the course of this program, Pakistani nuclear scientists and engineers developed the requisite nuclear infrastructure and gained expertise in the extraction, refining, processing and handling of fissile material, with the ultimate goal of designing a nuclear device. These objectives were achieved by the early 1980s, with the first successful cold test of a Pakistani nuclear device in 1983. The two institutions responsible for the execution of the program were the Pakistan Atomic Energy Commission (PAEC) and the Kahuta Research Laboratories (KRL), led by Munir Ahmad Khan and Abdul Qadeer Khan respectively. In 1976 an organization called Special Development Works (SDW) was created within the Pakistan Army, directly under the Chief of the Army Staff (COAS). This organization worked closely with the PAEC and KRL to secretly prepare the nuclear test sites in Baluchistan and other required civil infrastructure.
The meson bomb was a proposed nuclear weapon that would derive its destructive force from meson interactions with fissionable material like uranium. The idea behind the bomb was rejected by most scientists, but during the Cold War, American intelligence managed to trick the Soviet Union into conducting research on this topic, which resulted in several years of wasted labor by one of the Soviet nuclear weapon research bureaus.