A New Kind of Science

Author: Stephen Wolfram
Language: English
Subject: Complex systems
Genre: Non-fiction
Publisher: Wolfram Media
Publication date: 2002
Publication place: United States
Media type: Print
Pages: 1197 (hardcover)
ISBN: 1-57955-008-8
OCLC: 856779719
Website: A New Kind of Science (online edition)

A New Kind of Science is a book by Stephen Wolfram, [1] published by his company Wolfram Research under the imprint Wolfram Media in 2002. It contains an empirical and systematic study of computational systems such as cellular automata. Wolfram calls these systems simple programs and argues that the scientific philosophy and methods appropriate for the study of simple programs are relevant to other fields of science.

Contents

Computation and its implications

The thesis of A New Kind of Science (NKS) is twofold: that the nature of computation must be explored experimentally, and that the results of these experiments have great relevance to understanding the physical world. [2]

Simple programs

The basic subject of Wolfram's "new kind of science" is the study of simple abstract rules—essentially, elementary computer programs. In almost any class of computational system, one very quickly finds instances of great complexity among its simplest cases: complexity that emerges after many iterations of the same simple set of rules applied to the system's own output. This seems to be true regardless of the components of the system and the details of its setup. Systems explored in the book include, amongst others, cellular automata in one, two, and three dimensions; mobile automata; Turing machines in one and two dimensions; several varieties of substitution and network systems; recursive functions; nested recursive functions; combinators; tag systems; register machines; and reversal-addition systems. For a program to qualify as simple, there are several requirements (an illustrative code sketch follows the list):

  1. Its operation can be completely explained by a simple graphical illustration.
  2. It can be completely explained in a few sentences of human language.
  3. It can be implemented in a computer language using just a few lines of code.
  4. The number of its possible variations is small enough so that all of them can be computed.
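
All four criteria are met by the elementary cellular automata with which the book opens. As a purely illustrative sketch (in Python rather than the book's Mathematica), the following lines implement Rule 30, a rule Wolfram singles out for producing seemingly random behavior from a single black cell; the rule-number encoding follows the standard Wolfram numbering of elementary cellular automata:

```python
# A minimal, illustrative sketch (not the book's own code) of an elementary
# cellular automaton. The binary digits of the rule number encode the update:
# the neighborhood (left, center, right), read as a 3-bit number, selects a
# bit of the rule. Rule 30 is one of Wolfram's signature examples.

def step(cells, rule=30):
    """One synchronous update of an elementary cellular automaton (wrapping edges)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single black cell and print successive rows as text.
cells = [0] * 31
cells[15] = 1
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Despite satisfying all four simplicity criteria, the printed evolution quickly develops an irregular, seemingly random interior: the kind of unexpected complexity the book catalogues.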

Generally, simple programs tend to have a very simple abstract framework. Simple cellular automata, Turing machines, and combinators are examples of such frameworks, while more complex cellular automata do not necessarily qualify as simple programs. It is also possible to invent new frameworks, particularly to capture the operation of natural systems. The remarkable feature of simple programs is that a significant percentage of them are capable of producing great complexity. Simply enumerating all possible variations of almost any class of programs quickly leads one to examples that do unexpected and interesting things. This leads to the question: if the program is so simple, where does the complexity come from? In a sense, there is not enough room in the program's definition to directly encode all the things the program can do. Therefore, simple programs can be seen as a minimal example of emergence. A logical deduction from this phenomenon is that if the details of the program's rules have little direct relationship to its behavior, then it is very difficult to directly engineer a simple program to perform a specific behavior. An alternative approach is to try to engineer a simple overall computational framework, and then do a brute-force search through all of the possible components for the best match.
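
As a hedged sketch of that brute-force approach (illustrative, not the book's own procedure), one can enumerate all 256 elementary cellular automaton rules and apply a crude filter; here, whether the evolution revisits a state within 200 steps stands in for the visual inspection the book relies on:

```python
# Illustrative brute-force search over the space of elementary cellular
# automaton rules (0-255): keep those whose evolution from a single cell has
# not revisited any state after 200 steps. The periodicity test is a crude,
# assumed stand-in for the book's visual inspection of behavior.

def step(cells, rule):
    """One synchronous update of an elementary cellular automaton (wrapping)."""
    n = len(cells)
    return tuple(
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def shows_no_short_period(rule, width=101, steps=200):
    cells = (0,) * (width // 2) + (1,) + (0,) * (width // 2)
    seen = set()
    for _ in range(steps):
        if cells in seen:        # state revisited: evolution is periodic
            return False
        seen.add(cells)
        cells = step(cells, rule)
    return True

candidates = [r for r in range(256) if shows_no_short_period(r)]
print(len(candidates), "of 256 rules show no short-period behavior")
```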

Simple programs are capable of a remarkable range of behavior. Some have been proven to be universal computers. Others exhibit properties familiar from traditional science, such as thermodynamic behavior, continuum behavior, conserved quantities, percolation, sensitive dependence on initial conditions, and others. They have been used as models of traffic, material fracture, crystal growth, biological growth, and various sociological, geological, and ecological phenomena. Another feature of simple programs is that, according to the book, making them more complicated seems to have little effect on their overall complexity. A New Kind of Science argues that this is evidence that simple programs are enough to capture the essence of almost any complex system.

Mapping and mining the computational universe

In order to study simple rules and their often-complex behaviour, Wolfram argues that it is necessary to systematically explore all of these computational systems and document what they do. He further argues that this study should become a new branch of science, like physics or chemistry. The basic goal of this field is to understand and characterize the computational universe using experimental methods.

The proposed new branch of scientific exploration admits many different forms of scientific production. For instance, qualitative classifications are often the result of initial forays into the computational jungle, while explicit proofs that certain systems compute this or that function are also admissible. Some forms of production are in certain ways unique to this field of study, such as the discovery of computational mechanisms that emerge in different systems in strikingly different forms.

Another type of production involves the creation of programs for the analysis of computational systems. In the NKS framework, these themselves should be simple programs, and subject to the same goals and methodology. An extension of this idea is that the human mind is itself a computational system, and hence providing it with raw data in as effective a way as possible is crucial to research. Wolfram believes that programs and their analysis should be visualized as directly as possible, and exhaustively examined by the thousands or more. Since this new field concerns abstract rules, it can in principle address issues relevant to other fields of science. However, in general Wolfram's idea is that novel ideas and mechanisms can be discovered in the computational universe, where they can be represented in their simplest forms, and then other fields can choose among these discoveries for those they find relevant.

Systematic abstract science

While Wolfram advocates the study of simple programs as a scientific discipline in its own right, he also argues that its methodology will revolutionize other fields of science. The basis of his argument is that the study of simple programs is the minimal possible form of science, grounded equally in both abstraction and empirical experimentation. Every aspect of the methodology advocated in NKS is optimized to make experimentation as direct, easy, and meaningful as possible while maximizing the chances that the experiment will do something unexpected. Just as this methodology allows computational mechanisms to be studied in their simplest forms, Wolfram argues that the process of doing so engages with the mathematical basis of the physical world, and therefore has much to offer the sciences.

Wolfram argues that the computational realities of the universe make science hard for fundamental reasons. But he also argues that by understanding the importance of these realities, we can learn to use them in our favor. For instance, instead of reverse engineering our theories from observation, we can enumerate systems and then try to match them to the behaviors we observe. A major theme of NKS is investigating the structure of the possibility space. Wolfram argues that science is far too ad hoc, in part because the models used are too complicated and unnecessarily organized around the limited primitives of traditional mathematics. Wolfram advocates using models whose variations are enumerable and whose consequences are straightforward to compute and analyze.

Philosophical underpinnings

Computational irreducibility

Wolfram argues that one of his achievements is in providing a coherent system of ideas that justifies computation as an organizing principle of science. For instance, he argues that the concept of computational irreducibility (that some complex computations are not amenable to short-cuts and cannot be "reduced"), is ultimately the reason why computational models of nature must be considered in addition to traditional mathematical models. Likewise, his idea of intrinsic randomness generation—that natural systems can generate their own randomness, rather than using chaos theory or stochastic perturbations—implies that computational models do not need to include explicit randomness.
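
A hedged sketch of the contrast irreducibility draws (illustrative only; the irreducibility of Rule 30 is a conjecture, not a theorem): computing 2^n is reducible because a closed-form shortcut exists, whereas no general shortcut is known for the center column of the Rule 30 cellular automaton, which as far as anyone knows must be simulated step by step:

```python
# A hedged illustration of computational irreducibility (a sketch, not a
# proof). A reducible process admits a shortcut: the state after n doublings
# is just 2**n, with no iteration required. For Rule 30, by contrast, no
# general shortcut is known: the only known way to learn the center cell's
# value at step n is to simulate all n steps.

def rule30_center(n):
    """Center-column value of Rule 30 after n steps from a single black cell."""
    width = 2 * n + 3
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(n):
        cells = [
            (30 >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % width])) & 1
            for i in range(width)
        ]
    return cells[width // 2]

print(2 ** 20)            # reducible: computed directly from a formula
print(rule30_center(20))  # irreducible (conjecturally): requires simulation
```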

Principle of computational equivalence

Based on his experimental results, Wolfram developed the principle of computational equivalence (PCE): the principle states that systems found in the natural world can perform computations up to a maximal ("universal") level of computational power, and that most systems attain this level. In principle, such systems compute the same things as any computer, so computation is simply a question of translating inputs and outputs from one system to another. Consequently, most systems are computationally equivalent. Proposed examples of such systems are the workings of the human brain and the evolution of weather systems.

The principle can be restated as follows: almost all processes that are not obviously simple are of equivalent sophistication. From this principle, Wolfram draws an array of concrete deductions which he argues reinforce his theory. Possibly the most important among these is an explanation as to why we experience randomness and complexity: often, the systems we analyze are just as sophisticated as we are. Thus, complexity is not a special quality of systems, like, for instance, the concept of "heat", but simply a label for all systems whose computations are sophisticated. Wolfram argues that understanding this makes possible the "normal science" of the NKS paradigm.

Applications and results

There are a number of specific results and ideas in the NKS book, and they can be organized into several themes. One common theme of examples and applications is demonstrating how little complexity it takes to achieve interesting behavior, and how the proper methodology can discover this behavior.

First, there are several cases where the NKS book introduces what was, during the book's composition, the simplest known system in some class that has a particular characteristic. Some examples include the first primitive recursive function that results in complexity, the smallest universal Turing machine, and the shortest axiom for propositional calculus. In a similar vein, Wolfram also demonstrates many simple programs that exhibit phenomena like phase transitions, conserved quantities, continuum behavior, and thermodynamics that are familiar from traditional science. Simple computational models of natural systems like shell growth, fluid turbulence, and phyllotaxis are a final category of applications that fall in this theme.

Another common theme is taking facts about the computational universe as a whole and using them to reason about fields in a holistic way. For instance, Wolfram discusses how facts about the computational universe inform evolutionary theory, SETI, free will, computational complexity theory, and philosophical fields like ontology, epistemology, and even postmodernism.

Wolfram suggests that the theory of computational irreducibility may provide a resolution to the existence of free will in a nominally deterministic universe. He posits that the computational process in the brain of the being with free will is actually complex enough so that it cannot be captured in a simpler computation, due to the principle of computational irreducibility. Thus, while the process is indeed deterministic, there is no better way to determine the being's will than, in essence, to run the experiment and let the being exercise it.

The book also contains a number of individual results—both experimental and analytic—about what a particular automaton computes, or what its characteristics are, using some methods of analysis.

The book presents a new technical result concerning the Turing completeness of the Rule 110 cellular automaton. Very small Turing machines can simulate Rule 110, which Wolfram demonstrates using a 2-state 5-symbol universal Turing machine. Wolfram conjectures that a particular 2-state 3-symbol Turing machine is universal. In 2007, as part of commemorating the book's fifth anniversary, Wolfram's company offered a $25,000 prize for a proof that this Turing machine is universal. [3] Alex Smith, a computer science student from Birmingham, UK, won the prize later that year by proving Wolfram's conjecture. [4] [5]
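
To give a sense of how little machinery a Turing machine needs, here is a minimal simulator sketch; the rule table is a textbook binary incrementer chosen purely for illustration, not Wolfram's 2-state 3-symbol machine, whose actual transition table appears in the book:

```python
# A minimal Turing machine simulator (illustrative sketch). The rule table
# below is a standard textbook binary incrementer, NOT Wolfram's 2,3 machine;
# it is included only to show how small a complete transition table can be.

def run_tm(rules, tape, state, head, max_steps=1000):
    tape = dict(enumerate(tape))            # sparse tape, blank = '_'
    for _ in range(max_steps):
        symbol = tape.get(head, "_")
        if (state, symbol) not in rules:    # no matching rule: halt
            break
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1))

# Increment a binary number: scan from the rightmost bit, flipping 1s to 0s
# until a 0 (or a blank) absorbs the carry.
rules = {
    ("carry", "1"): ("carry", "0", -1),   # 1 + carry -> 0, carry moves left
    ("carry", "0"): ("done",  "1",  0),   # 0 + carry -> 1, halt
    ("carry", "_"): ("done",  "1",  0),   # carry past the end: new leading 1
}

print(run_tm(rules, "1011", "carry", head=3))  # 1011 + 1 = 1100
```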

Reception

A New Kind of Science received wide coverage in periodicals, including articles in The New York Times, [6] Newsweek, [7] Wired, [8] and The Economist. [9] Some scientists [who?] criticized the book as abrasive and arrogant, and perceived a fatal flaw: that simple systems such as cellular automata are not complex enough to describe the degree of complexity present in evolved systems. They also observed that Wolfram ignored the research categorizing the complexity of systems. Although critics accept Wolfram's result showing universal computation, they view it as minor and dispute Wolfram's claim of a paradigm shift. Others found that the work contained valuable insights and refreshing ideas. [10] [11] Wolfram addressed his critics in a series of blog posts. [12] [13]

Scientific philosophy

A tenet of NKS is that the simpler the system, the more likely a version of it will recur in a wide variety of more complicated contexts. Therefore, NKS argues that systematically exploring the space of simple programs will lead to a base of reusable knowledge. However, many scientists believe that of all possible parameters, only some actually occur in the universe. For instance, of all possible permutations of the symbols making up an equation, most will be essentially meaningless. NKS has also been criticized for asserting that the behavior of simple systems is somehow representative of all systems.

Methodology

A common criticism of NKS is that it does not follow established scientific methodology. For instance, NKS does not establish rigorous mathematical definitions, [14] nor does it attempt to prove theorems, and most formulas and equations are written in Mathematica rather than standard notation. [15] Along these lines, NKS has also been criticized for being heavily visual, with much information conveyed by pictures that do not have formal meaning. [11] It has also been criticized for not engaging with modern research in the field of complexity, particularly work that studies complexity from a rigorous mathematical perspective, and for misrepresenting chaos theory.

Utility

NKS has been criticized for not providing specific results that would be immediately applicable to ongoing scientific research. [11] There has also been criticism, implicit and explicit, that the study of simple programs has little connection to the physical universe, and hence is of limited value. Steven Weinberg has pointed out that no real world system has been explained using Wolfram's methods in a satisfactory fashion. [16] Mathematician Steven G. Krantz wrote, "Just because Wolfram can cook up a cellular automaton that seems to produce the spot pattern on a leopard, may we safely conclude that he understands the mechanism by which the spots are produced on the leopard, or why the spots are there, or what function (evolutionary or mating or camouflage or other) they perform?" [17]

Principle of computational equivalence (PCE)

The principle of computational equivalence (PCE) has been criticized for being vague, unmathematical, and for not making directly verifiable predictions. [15] It has also been criticized for being contrary to the spirit of research in mathematical logic and computational complexity theory, which seek to make fine-grained distinctions between levels of computational sophistication, and for wrongly conflating different kinds of universality property. [15] Moreover, critics such as Ray Kurzweil have argued that it ignores the distinction between hardware and software; while two computers may be equivalent in power, it does not follow that any two programs they might run are also equivalent. [18] Others suggest it is little more than a rechristening of the Church–Turing thesis.

The fundamental theory (NKS Chapter 9)

Wolfram's speculations about a path toward a fundamental theory of physics have been criticized as vague and obsolete. Scott Aaronson, Professor of Computer Science at the University of Texas at Austin, also argues that Wolfram's methods cannot be compatible with both special relativity and the violation of Bell inequalities, and hence cannot explain the observed results of Bell tests. [19]

Konrad Zuse and Edward Fredkin pioneered the idea of a computable universe: Zuse suggested in his book Calculating Space (Rechnender Raum) that the world might be like a cellular automaton, an idea Fredkin later developed further using a toy model called Salt. [20] It has been claimed that NKS tries to take these ideas as its own, but Wolfram's model of the universe is a rewriting network, not a cellular automaton; Wolfram himself has suggested that a cellular automaton cannot account for relativistic features such as the absence of an absolute time frame. [21] Jürgen Schmidhuber has also charged that his work on Turing machine-computable physics was used without attribution, namely his idea of enumerating possible Turing-computable universes. [22]

In a 2002 review of NKS, the Nobel laureate and elementary particle physicist Steven Weinberg wrote, "Wolfram himself is a lapsed elementary particle physicist, and I suppose he can't resist trying to apply his experience with digital computer programs to the laws of nature. This has led him to the view (also considered in a 1981 paper by Richard Feynman) that nature is discrete rather than continuous. He suggests that space consists of a set of isolated points, like cells in a cellular automaton, and that even time flows in discrete steps. Following an idea of Edward Fredkin, he concludes that the universe itself would then be an automaton, like a giant computer. It's possible, but I can't see any motivation for these speculations, except that this is the sort of system that Wolfram and others have become used to in their work on computers. So might a carpenter, looking at the moon, suppose that it is made of wood." [23]

Natural selection

Wolfram's claim that natural selection is not the fundamental cause of complexity in biology has led journalist Chris Lavers to state that Wolfram does not understand the theory of evolution. [24]

Originality

NKS has been heavily criticized as not being original or important enough to justify its title and claims.

The authoritative manner in which NKS presents a vast number of examples and arguments has been criticized as leading the reader to believe that each of these ideas was original to Wolfram; in particular, one of the most substantial new technical results presented in the book, that the Rule 110 cellular automaton is Turing complete, was not proven by Wolfram: he credits the proof to his research assistant, Matthew Cook. [25] However, the notes section at the end of the book acknowledges many of the discoveries made by other scientists, citing their names together with historical facts, although not in the form of a traditional bibliography. Additionally, the idea that very simple rules often generate great complexity was already established in science, particularly in chaos theory and complex systems.


References

  1. Rosen, Judith (2003). "Weighing Wolfram's 'New Kind of Science'". Publishers Weekly.
  2. The World According to Wolfram
  3. "The Wolfram 2,3 Turing Machine Research Prize". Archived from the original on 15 May 2011. Retrieved 2011-03-31.
  4. "The Wolfram 2,3 Turing Machine Is Universal!" . Retrieved 2007-10-24.
  5. "Technical Commentary [on Wolfram 2,3 Turing machine universality proof]" . Retrieved 2007-10-24.
  6. Johnson, George (9 June 2002). "'A New Kind of Science': You Know That Space-Time Thing? Never Mind". The New York Times. Retrieved 28 May 2009.
  7. Levy, Stephen (27 May 2002). "Great Minds, Great Ideas". Newsweek. Retrieved 28 May 2009.
  8. Levy, Stephen (June 2002). "The Man Who Cracked The Code to Everything ..." Wired. Archived from the original on 27 May 2009. Retrieved 28 May 2009.
  9. "The science of everything". The Economist. 30 May 2002. Retrieved 28 May 2009.
  10. Rucker, Rudy (November 2003). "Review: A New Kind of Science" (PDF). American Mathematical Monthly. 110 (9): 851–861. doi:10.2307/3647819. JSTOR 3647819. Archived (PDF) from the original on 28 March 2004. Retrieved 28 May 2009.
  11. Berry, Michael; Ellis, John; Deutsch, David (15 May 2002). "A Revolution or self indulgent hype? How top scientists view Wolfram" (PDF). The Daily Telegraph. Archived (PDF) from the original on 19 May 2012. Retrieved 14 August 2012.
  12. Wolfram, Stephen (7 May 2012). "It's Been 10 Years: What's Happened with A New Kind of Science?". Stephen Wolfram Blog. Retrieved 14 August 2012.
  13. Wolfram, Stephen (12 May 2012). "Living a Paradigm Shift: Looking Back on Reactions to A New Kind of Science". Stephen Wolfram Blog. Retrieved 14 August 2012.
  14. Bailey, David (September 2002). "A Reclusive Kind of Science" (PDF). Computing in Science and Engineering: 79–81. Retrieved 20 March 2021.
  15. Gray, Lawrence (2003). "A Mathematician Looks at Wolfram's New Kind of Science" (PDF). Notices of the AMS. 50 (2): 200–211. Archived (PDF) from the original on 8 March 2003.
  16. Weiss, Peter (2003). "In search of a scientific revolution: controversial genius Stephen Wolfram presses onward". Science News.
  17. Krantz, Steven G. (2003). "Review of A New Kind of Science" (PDF). Bulletin of the American Mathematical Society. 40 (1): 143–150. doi:10.1090/S0273-0979-02-00970-9. Archived (PDF) from the original on 2012-03-17.
  18. Kurzweil, Ray (13 May 2002). "Reflections on Stephen Wolfram's A New Kind of Science". Kurzweil Accelerating Intelligence Blog.
  19. Aaronson, Scott (2002). "Book Review of A New Kind of Science" (PostScript). Quantum Information and Computation. 2 (5): 410–423. doi:10.26421/QIC2.5-7.
  20. "ZUSE-FREDKIN-THESIS". usf.edu.
  21. "Fundamental Physics: A New Kind of Science | Online by Stephen Wolfram".
  22. Schmidhuber, Jürgen. "Origin of main ideas in Wolfram's book "A New Kind of Science"". CERN Courier.
  23. Weinberg, S. (24 October 2002). "Is the Universe a Computer?". The New York Review of Books. 49 (16).
  24. Lavers, Chris (3 August 2002). "How the cheetah got his spots". The Guardian. London. Retrieved 28 May 2009.
  25. "Note (C) for the Rule 110 Cellular Automaton: A New Kind of Science | Online by Stephen Wolfram [Page 1115]".