Lateral computing


Lateral computing is a lateral thinking approach to solving computing problems. Lateral thinking was popularized by Edward de Bono. [1] This thinking technique is applied to generate creative ideas and solve problems. Similarly, applying lateral-computing techniques to a problem can make it much easier to arrive at a computationally inexpensive, easy-to-implement, efficient, innovative or unconventional solution.


The traditional or conventional approach to solving computing problems is to either build mathematical models or use IF-THEN-ELSE structures. For example, a brute-force search is used in many chess engines, [2] but this approach is computationally expensive and sometimes arrives at poor solutions. It is for problems like this that lateral computing can be useful in forming a better solution.

A simple problem of backing up a truck can be used to illustrate lateral computing. This is one of the difficult tasks for traditional computing techniques, and it has been efficiently solved by the use of fuzzy logic (which is a lateral-computing technique). Lateral computing sometimes arrives at a novel solution to a particular computing problem by using models of how living beings, such as humans, ants and honeybees, solve a problem; of how pure crystals are formed by annealing; of the evolution of living beings; or of quantum mechanics.

From lateral-thinking to lateral-computing

Lateral thinking is a technique for creative thinking and problem solving. [1] The brain, as the centre of thinking, has a self-organizing information system. It tends to create patterns, and the traditional thinking process uses them to solve problems. The lateral-thinking technique proposes to escape from this patterning to arrive at better solutions through new ideas. Provocative use of information processing is the basic underlying principle of lateral thinking.

The provocative operator (PO) is something which characterizes lateral thinking. Its function is to generate new ideas by provocation and to provide an escape route from old ideas. It creates a provisional arrangement of information.

Water logic is contrasted with traditional or rock logic. [3] Water logic has boundaries which depend on circumstances and conditions, while rock logic has hard boundaries. Water logic, in some ways, resembles fuzzy logic.

Transition to lateral-computing

Lateral computing makes provocative use of information processing, similar to lateral thinking. This can be explained with evolutionary computing, a very useful lateral-computing technique. Evolution proceeds by change and selection: random mutation provides the change, and selection is through survival of the fittest. The random mutation works as provocative information processing and provides a new avenue for generating better solutions to the computing problem. The term "lateral computing" was first proposed by Prof. C. R. Suthikshn Kumar, and the First World Congress on Lateral Computing (WCLC 2004) was organized with international participants in December 2004.

Lateral computing takes analogies from real-world examples such as:

  - how pure crystals are formed by the slow cooling of a hot, disordered state (annealing)
  - how the neural networks of the human brain solve recognition problems
  - how colonies of ants and swarms of honeybees solve sophisticated problems collectively
  - how living beings evolve through mutation and natural selection
  - how quantum-mechanical systems hold information in superposed states

The factors differentiating "lateral computing" from conventional computing are discussed below.

Conventional versus lateral computing

It is very hard to draw a clear boundary between conventional and lateral computing. Over time, some unconventional computing techniques become an integral part of mainstream computing, so there will always be an overlap between conventional and lateral computing. Classifying a computing technique as conventional or lateral is therefore a tough task. The boundaries are fuzzy, and one may approach them with fuzzy sets.

Formal definition

Lateral computing is a fuzzy set of all computing techniques which use an unconventional computing approach. Hence lateral computing includes techniques which use semi-conventional or hybrid computing. The degree of membership for lateral-computing techniques is greater than 0 in the fuzzy set of unconventional computing techniques.

The following brings out some important differentiators between conventional computing and lateral computing.

Lateral computing and parallel computing

Parallel computing focuses on improving the performance of computers/algorithms through the use of several computing elements (such as processing elements). [4] The computing speed is improved by using several computing elements. Parallel computing is an extension of conventional sequential computing. However, in lateral computing, the problem is solved using unconventional information processing, whether sequential or parallel computing is used.

A review of lateral-computing techniques

There are several computing techniques which fit the lateral-computing paradigm. Here is a brief description of some of them:

Swarm intelligence

Swarm intelligence (SI) is the property of a system whereby the collective behaviors of (unsophisticated) agents, interacting locally with their environment, cause coherent functional global patterns to emerge. [5] SI provides a basis with which it is possible to explore collective (or distributed) problem solving without centralized control or the provision of a global model.

One interesting swarm-intelligence technique is the ant colony algorithm, [6] in which artificial ants deposit and follow pheromone trails to find good paths.
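
A minimal sketch of the idea, applied to a tiny travelling-salesman instance; the distance matrix and parameter values here are illustrative assumptions, not values from the cited work.

```python
import random

# Illustrative ant colony optimization sketch for a small travelling-salesman
# instance. The distance matrix and parameters below are made up for the example.
DIST = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
N = len(DIST)
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 100.0   # pheromone weight, heuristic weight, evaporation, deposit

pheromone = [[1.0] * N for _ in range(N)]

def tour_length(tour):
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def build_tour():
    """Each ant builds a tour, choosing the next city with probability
    proportional to pheromone^ALPHA * (1/distance)^BETA."""
    tour = [random.randrange(N)]
    while len(tour) < N:
        i = tour[-1]
        choices = [j for j in range(N) if j not in tour]
        weights = [(pheromone[i][j] ** ALPHA) * ((1.0 / DIST[i][j]) ** BETA) for j in choices]
        tour.append(random.choices(choices, weights=weights)[0])
    return tour

best = None
for _ in range(50):                               # iterations
    tours = [build_tour() for _ in range(10)]     # 10 ants per iteration
    for row in pheromone:                         # evaporation
        for j in range(N):
            row[j] *= (1 - RHO)
    for t in tours:                               # deposit: shorter tours leave more pheromone
        for i in range(N):
            a, b = t[i], t[(i + 1) % N]
            pheromone[a][b] += Q / tour_length(t)
            pheromone[b][a] += Q / tour_length(t)
        if best is None or tour_length(t) < tour_length(best):
            best = t

print(best, tour_length(best))
```

The positive feedback through pheromone, combined with random exploration, is what lets the colony converge on short tours without any central coordinator.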

Agent-based systems

Agents are encapsulated computer systems that are situated in some environment and are capable of flexible, autonomous action in that environment in order to meet their design objectives. [7] Agents are considered to be autonomous (independent, not controllable), reactive (responding to events), pro-active (initiating actions of their own volition), and social (communicative). Agents vary in their abilities: they can be static or mobile, and may or may not be intelligent. Each agent may have its own task and/or role. Agents, and multi-agent systems, are used as a metaphor to model complex distributed processes. Such agents invariably need to interact with one another in order to manage their inter-dependencies. These interactions involve agents cooperating, negotiating and coordinating with one another.

Agent-based systems are computer programs that try to simulate various complex phenomena via virtual "agents" that represent the components of a business system. The behaviors of these agents are programmed with rules that realistically depict how business is conducted. As widely varied individual agents interact in the model, the simulation shows how their collective behaviors govern the performance of the entire system, for instance the emergence of a successful product or an optimal schedule. These simulations are powerful strategic tools for "what-if" scenario analysis: as managers change agent characteristics or "rules", the impact of the change can easily be seen in the model output.
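
As a hedged illustration of the idea, the sketch below simulates a toy market of buyer agents following simple rules; the agent attributes, rules and numbers are invented for the example and are not taken from any particular agent-based modelling toolkit.

```python
import random

class BuyerAgent:
    """A toy agent: buys when the price is below its private reservation price,
    and adjusts that reservation price after observing the market (rule is illustrative)."""
    def __init__(self, reservation_price):
        self.reservation_price = reservation_price
        self.purchases = 0

    def step(self, market_price):
        if market_price <= self.reservation_price:
            self.purchases += 1
            self.reservation_price *= 0.98   # satisfied buyers become slightly pickier
        else:
            self.reservation_price *= 1.02   # priced-out buyers raise their threshold

def simulate(num_agents=100, steps=50, price=10.0):
    agents = [BuyerAgent(random.uniform(5, 15)) for _ in range(num_agents)]
    demand_history = []
    for _ in range(steps):
        buyers = 0
        for agent in agents:
            before = agent.purchases
            agent.step(price)
            buyers += agent.purchases - before
        demand_history.append(buyers)
        # The "what-if" lever: the market price adapts to aggregate demand.
        price *= 1.01 if buyers > num_agents / 2 else 0.99
    return demand_history

print(simulate()[-5:])   # emergent demand pattern over the last few steps
```

Changing the agent rules or the price-adaptation rule and re-running the simulation is exactly the kind of "what-if" analysis described above.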

Grid computing

By analogy with the electrical power grid, a computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities. [8] Grid computing has applications in areas such as distributed supercomputing, high-throughput computing, on-demand computing, data-intensive computing, and collaborative computing.

Autonomic computing

The autonomic nervous system governs our heart rate and body temperature, thus freeing our conscious brain from the burden of dealing with these and many other low-level, yet vital, functions. The essence of autonomic computing is self-management, the intent of which is to free system administrators from the details of system operation and maintenance. [9]

Four aspects of autonomic computing are self-configuration, self-healing, self-optimization and self-protection.

This is a grand challenge promoted by IBM. [10]

Optical computing

Optical computing uses photons rather than conventional electrons for computing. [11] There are quite a few instances of optical computers and successful uses of them. Conventional logic gates use semiconductors, which use electrons for transporting signals. In the case of optical computers, the photons in a light beam are used to do computation.

There are numerous advantages of using optical devices for computing such as immunity to electromagnetic interference, large bandwidth, etc.

DNA computing

DNA computing uses strands of DNA to encode the instance of the problem and to manipulate them using techniques commonly available in any molecular biology laboratory in order to simulate operations that select the solution of the problem if it exists.

Since the DNA molecule is also a code, made up of a sequence of four bases that pair up in a predictable manner, many scientists have thought about the possibility of creating a molecular computer. Such computers rely on the massively parallel reactions of DNA nucleotides binding with their complements, a brute-force method that has been claimed to hold enormous potential for a new generation of computers vastly faster than today's PCs. DNA computing has been heralded as the "first example of true nanotechnology", and even the "start of a new era", which forges an unprecedented link between computer science and life science.

An example application of DNA computing is the solution of the Hamiltonian path problem, a well-known NP-complete problem. The number of required lab operations using DNA grows linearly with the number of vertices of the graph. [12] Molecular algorithms have been reported that solve cryptographic problems in a polynomial number of steps; factoring large numbers, notably, is a relevant problem in many cryptographic applications.
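
DNA computing itself is laboratory work, but the generate-and-filter idea behind the Hamiltonian-path experiment can be sketched in software: produce a large pool of random candidate paths (standing in for randomly ligated DNA strands) and successively filter out those that violate the constraints. The graph and pool size below are illustrative assumptions.

```python
import random

# Directed graph as a set of edges; vertices 0..5 (illustrative example only).
EDGES = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 3), (2, 4), (0, 2)}
N = 6
START, END = 0, 5

def random_path():
    """Stands in for a randomly ligated DNA strand: a random vertex sequence."""
    return [START] + [random.randrange(N) for _ in range(N - 2)] + [END]

# "Massively parallel" generation, then successive filtering steps,
# mirroring the extraction steps performed in the laboratory protocol.
pool = [random_path() for _ in range(200000)]
pool = [p for p in pool if all((a, b) in EDGES for a, b in zip(p, p[1:]))]  # keep legal walks
pool = [p for p in pool if len(set(p)) == N]                                # visit every vertex once

print(pool[0] if pool else "no Hamiltonian path found in this pool")
```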

Quantum computing

In a quantum computer, the fundamental unit of information (called a quantum bit or qubit) is not binary but rather more quaternary in nature. [13] [14] This qubit property arises as a direct consequence of its adherence to the laws of quantum mechanics, which differ radically from the laws of classical physics. A qubit can exist not only in a state corresponding to the logical state 0 or 1 as in a classical bit, but also in states corresponding to a blend or quantum superposition of these classical states. In other words, a qubit can exist as a zero, a one, or simultaneously as both 0 and 1, with a numerical coefficient representing the probability of each state. A quantum computer manipulates qubits by executing a series of quantum gates, each a unitary transformation acting on a single qubit or pair of qubits. By applying these gates in succession, a quantum computer can perform a complicated unitary transformation on a set of qubits in some initial state.
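
As a hedged numerical illustration (a classical simulation with numpy, not an interface to real quantum hardware), the sketch below applies a Hadamard gate, a standard single-qubit unitary, to the state |0⟩ and samples measurements from the resulting superposition.

```python
import numpy as np

# A qubit state is a 2-component complex vector; |0> = (1, 0).
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate is a unitary matrix that puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                      # apply the gate
probs = np.abs(state) ** 2            # Born rule: measurement probabilities
probs = probs / probs.sum()           # guard against floating-point rounding

# Simulate repeated measurements: roughly half 0s and half 1s.
samples = np.random.choice([0, 1], size=1000, p=probs)
print(probs, np.bincount(samples))
```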

Reconfigurable computing

Field-programmable gate arrays (FPGAs) are making it possible to build truly reconfigurable computers. [15] The computer architecture is transformed by on-the-fly reconfiguration of the FPGA circuitry. The optimal matching between architecture and algorithm improves the performance of the reconfigurable computer. The key feature is the combination of hardware performance with software flexibility.

For several applications such as fingerprint matching, DNA sequence comparison, etc., reconfigurable computers have been shown to perform several orders of magnitude better than conventional computers. [16]

Simulated annealing

The simulated annealing algorithm is designed by looking at how pure crystals form from a heated, disordered state while the system is cooled slowly. [17] The computing problem is recast as a simulated-annealing exercise and the solutions are arrived at. The working principle of simulated annealing is borrowed from metallurgy: a piece of metal is heated (the atoms are given thermal agitation), and then the metal is left to cool slowly. The slow and regular cooling of the metal allows the atoms to slide progressively into their most stable ("minimal energy") positions. (Rapid cooling would have "frozen" them in whatever position they happened to be in at that time.) The resulting structure of the metal is stronger and more stable. By simulating the process of annealing inside a computer program, it is possible to find answers to difficult and very complex problems. Instead of minimizing the energy of a block of metal or maximizing its strength, the program minimizes or maximizes some objective relevant to the problem at hand.
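
A minimal sketch of this idea, minimizing an arbitrarily chosen one-dimensional objective with an illustrative cooling schedule:

```python
import math, random

def objective(x):
    # A bumpy one-dimensional function with many local minima (illustrative choice).
    return x * x + 10 * math.sin(3 * x)

x = random.uniform(-10, 10)            # current solution
best_x, best_f = x, objective(x)
T = 10.0                               # initial "temperature"

while T > 1e-3:
    candidate = x + random.gauss(0, 1)             # thermal agitation: a random move
    delta = objective(candidate) - objective(x)
    # Always accept improvements; accept worse moves with probability exp(-delta/T),
    # which lets the search escape local minima while the system is still "hot".
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = candidate
        if objective(x) < best_f:
            best_x, best_f = x, objective(x)
    T *= 0.999                                     # slow, regular cooling

print(best_x, best_f)
```

The acceptance of occasional worse moves is the lateral element: it corresponds to the thermal agitation that keeps the atoms from freezing into a poor configuration.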

Soft computing

One of the main components of lateral computing is soft computing, which approaches problems with a human-like information-processing model. [18] Soft computing comprises fuzzy logic, neuro-computing, evolutionary computing, machine learning and probabilistic-chaotic computing.

Neuro computing

Instead of solving a problem by creating a non-linear equation model of it, the biological neural network analogy is used for solving the problem. [19] The neural network is trained like a human brain to solve a given problem. This approach has become highly successful in solving some of the pattern recognition problems.

Evolutionary computing

The genetic algorithm (GA) resembles natural evolution and provides general-purpose optimization. [20] Genetic algorithms start with a population of chromosomes which represent the various solutions. The solutions are evaluated using a fitness function, and a selection process determines which solutions take part in the competition for the next generation. New solutions are created using evolutionary principles such as mutation and crossover. These algorithms are highly successful in solving search and optimization problems.
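
A minimal sketch of a genetic algorithm evolving bit strings toward an all-ones target (the classic "OneMax" toy problem); the population size, mutation rate and fitness function are illustrative choices.

```python
import random

GENES, POP, GENERATIONS = 30, 50, 100   # illustrative sizes

def fitness(chromosome):
    # OneMax: the fitness is simply the number of 1-bits.
    return sum(chromosome)

def crossover(a, b):
    point = random.randrange(1, GENES)
    return a[:point] + b[point:]

def mutate(c, rate=0.01):
    return [bit ^ 1 if random.random() < rate else bit for bit in c]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    # Variation: crossover plus mutation produces the next generation.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP)]
    population = children

print(max(fitness(c) for c in population))
```

Mutation plays the role of the provocative operator discussed earlier: it injects changes that selection alone would never produce.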

Fuzzy logic

Fuzzy logic is based on the fuzzy-set concepts proposed by Lotfi Zadeh. [21] The degree-of-membership concept is central to fuzzy sets. Fuzzy sets differ from crisp sets in that they allow an element to belong to a set to a degree (its degree of membership). This approach finds good applications in control problems. [22] Fuzzy logic has found enormous application and already has a big market presence in consumer electronics such as washing machines, microwaves, mobile phones, televisions and camcorders.
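
As a hedged illustration, the sketch below defines triangular membership functions for a toy temperature classifier and shows a reading belonging partly to two fuzzy sets at once; the linguistic terms and breakpoints are invented for the example.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a to b and falling from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(temp_c):
    # Degrees of membership in three overlapping fuzzy sets (breakpoints are illustrative).
    return {
        "cold":        triangular(temp_c, -10, 5, 18),
        "comfortable": triangular(temp_c, 15, 21, 27),
        "hot":         triangular(temp_c, 24, 35, 50),
    }

# A reading of 25 degrees is partly "comfortable" and partly "hot" at the same time,
# unlike a crisp set where it would belong to exactly one category.
print(fuzzify(25))
```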

Probabilistic/chaotic computing

Probabilistic computing engines make use of probabilistic graphical models such as Bayesian networks. Such computational techniques are referred to as randomization, yielding probabilistic algorithms. When interpreted as a physical phenomenon through classical statistical thermodynamics, such techniques lead to energy savings that are proportional to the probability p with which each primitive computational step is guaranteed to be correct (or, equivalently, to the probability of error, 1 − p). [23] Chaotic computing is based on chaos theory. [24]
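
As a hedged illustration of probabilistic reasoning with a graphical model, the sketch below performs exact inference by enumeration in a two-node Bayesian network (Rain influencing WetGrass); all probabilities are invented for the example.

```python
# Tiny Bayesian network: Rain -> WetGrass, with inference by enumeration.
# All probabilities are invented for the example.
P_rain = 0.2
P_wet_given = {True: 0.9, False: 0.1}          # P(WetGrass | Rain)

# P(Rain | WetGrass) via Bayes' rule, enumerating over the hidden variable.
p_wet = P_rain * P_wet_given[True] + (1 - P_rain) * P_wet_given[False]
p_rain_given_wet = P_rain * P_wet_given[True] / p_wet
print(round(p_rain_given_wet, 3))   # about 0.692
```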

Fractals

Fractals are objects displaying self-similarity at different scales. [25] Fractal generation involves small iterative algorithms. Fractals have dimensions greater than their topological dimensions. The length of a fractal is infinite, so its size cannot be measured. A fractal is described by an iterative algorithm, unlike a Euclidean shape, which is given by a simple formula. There are several types of fractals, and Mandelbrot sets are very popular.

Fractals have found applications in image processing, image compression, music generation, computer games, etc. The Mandelbrot set is a fractal named after its creator. Unlike other fractals, even though the Mandelbrot set is self-similar at magnified scales, its small-scale details are not identical to the whole; i.e., the Mandelbrot set is infinitely complex. Yet the process of generating it is based on an extremely simple equation. The Mandelbrot set M is a collection of complex numbers: a number C belongs to M if the iteration of the Mandelbrot equation Z_{n+1} = Z_n^2 + C, starting from Z_0 = 0, remains bounded.
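
The sketch below renders a coarse ASCII view of the Mandelbrot set using this iteration; the grid size and iteration limit are arbitrary choices for illustration.

```python
# Coarse ASCII rendering of the Mandelbrot set: C belongs to the set if the
# iteration Z -> Z*Z + C (starting from Z = 0) stays bounded.
WIDTH, HEIGHT, MAX_ITER = 60, 24, 40   # arbitrary resolution and iteration cap

for row in range(HEIGHT):
    line = ""
    for col in range(WIDTH):
        c = complex(-2.0 + 3.0 * col / WIDTH, -1.2 + 2.4 * row / HEIGHT)
        z = 0j
        for _ in range(MAX_ITER):
            z = z * z + c
            if abs(z) > 2:          # provably escapes to infinity
                line += " "
                break
        else:
            line += "*"             # still bounded after MAX_ITER steps
    print(line)
```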

Randomized algorithm

A randomized algorithm makes arbitrary choices during its execution. This can save execution time at the beginning of a program. The disadvantage of this method is the possibility that an incorrect solution will occur. A well-designed randomized algorithm will have a very high probability of returning a correct answer. [26] The two categories of randomized algorithms are Las Vegas algorithms, which always return a correct answer but whose running time varies, and Monte Carlo algorithms, which run in bounded time but may return an incorrect answer with small probability.

Consider an algorithm to find the kth element of an array. A deterministic approach would be to choose a pivot element near the median of the list and partition the list around that element. The randomized approach to this problem would be to choose a pivot at random, thus saving time at the beginning of the process. Like approximation algorithms, randomized algorithms can be used to attack tough NP-complete problems more quickly. An advantage over approximation algorithms, however, is that a randomized algorithm will eventually yield an exact answer if executed enough times.
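
A small sketch of the randomized selection idea described above (a Las Vegas-style algorithm: the answer is always correct, only the running time is random); the input array is an arbitrary example.

```python
import random

def quickselect(items, k):
    """Return the k-th smallest element (k is 0-based) by partitioning
    around a randomly chosen pivot instead of a carefully computed one."""
    pivot = random.choice(items)
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    larger  = [x for x in items if x > pivot]
    if k < len(smaller):
        return quickselect(smaller, k)
    if k < len(smaller) + len(equal):
        return pivot
    return quickselect(larger, k - len(smaller) - len(equal))

data = [7, 2, 9, 4, 4, 8, 1, 5]
print(quickselect(data, len(data) // 2))   # the median-ish element
```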

Machine learning

Human beings and animals learn new skills, languages and concepts. Similarly, machine learning algorithms provide the capability to generalize from training data. [27] There are two broad classes of machine learning (ML): supervised learning, which learns from labelled examples, and unsupervised learning, which finds structure in unlabelled data.

One well-known machine learning technique is the backpropagation algorithm, [19] which mimics how humans learn from examples. The training patterns are repeatedly presented to the network, the error is propagated backwards, and the network weights are adjusted using gradient descent. The network converges over several hundred iterations.
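
A minimal numpy sketch of backpropagation on the XOR problem: the forward pass, the backward propagation of the error, and the gradient-descent weight update. The network size, learning rate and iteration count are illustrative assumptions, not values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # training patterns
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets
Xb = np.hstack([X, np.ones((4, 1))])                          # add a bias input

W1 = rng.normal(size=(3, 4))        # (inputs + bias) -> hidden weights (illustrative size)
W2 = rng.normal(size=(5, 1))        # (hidden + bias) -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: the training patterns are repeatedly presented to the network.
    h = sigmoid(Xb @ W1)
    hb = np.hstack([h, np.ones((4, 1))])
    out = sigmoid(hb @ W2)
    # Backward pass: the output error is propagated back through the layers.
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2[:-1].T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= 0.5 * hb.T @ err_out
    W1 -= 0.5 * Xb.T @ err_hid

print(out.round(2).ravel())          # typically converges toward [0, 1, 1, 0]
```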

Support vector machines [28]

This is another class of highly successful machine learning techniques, applied to tasks such as text classification, speaker recognition and image recognition.

Example applications

There are several successful applications of lateral-computing techniques; the techniques reviewed above have been applied in areas ranging from consumer electronics to optimization, control and pattern recognition.

Summary

Above is a review of lateral-computing techniques. Lateral computing is based on the lateral-thinking approach and applies unconventional techniques to solve computing problems. While most problems are solved with conventional techniques, there are problems which require lateral computing. For several such problems, lateral computing provides the advantages of computational efficiency, low implementation cost and better solutions compared with conventional computing. Lateral computing successfully tackles a class of problems by exploiting tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness and low solution cost. Lateral-computing techniques which use human-like information-processing models have been classified as "soft computing" in the literature.

Lateral computing is valuable when solving numerous computing problems whose mathematical models are unavailable. It provides a way of developing innovative solutions, resulting in smart systems with very high machine IQ (VHMIQ). This article has traced the transition from lateral thinking to lateral computing, described several lateral-computing techniques, and outlined their applications. Lateral computing aims at building a new generation of artificial intelligence based on unconventional processing.


References

  1. de Bono, E. (1990). Lateral Thinking for Management: A Handbook. Penguin Books. ISBN 978-0-07-094233-2.
  2. Hsu, F. H. (2002). Behind Deep Blue: Building the Computer That Defeated the World Chess Champion. Princeton University Press. ISBN   978-0-691-09065-8.
  3. de Bono, E. (1991). Water Logic. Penguin Books. ISBN   978-0-670-84231-5.
  4. Hwang, K. (1993). Advanced Computer Architecture: Parallelism, Scalability, Programmability. McGraw-Hill Book Co., New York. ISBN   978-0-07-031622-5.
  5. Bonabeau, E.; Dorigo, M.; Theraulaz, G. (1999). Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press. ISBN 978-0-19-513158-1.
  6. Dorigo, M.; Di Caro, G.; Gambardella, L. M. (1999). Ant Algorithms for Discrete Optimization. Artificial Life. MIT Press.
  7. Bradshaw, J. M. (1997). Software Agents. AAAI Press/The MIT Press. ISBN   978-0-262-52234-2.
  8. Foster, Ian (1999). "Computational Grids, Chapter 2". The Grid: Blueprint for a New Computing Infrastructure, Technical Report.
  9. Murch, R. (2004). Autonomic Computing. Pearson Publishers. ISBN   978-0-13-144025-8.
  10. "Autonomic". IBM. 2004.
  11. Karim, M. A.; Awwal, A. A. S. (1992). Optical Computing: An Introduction. Wiley Publishers. ISBN   978-0-471-52886-9.
  12. Pisanti, N. (1997). A Survey of DNA Computing (Technical report). University of Pisa, Italy. TR-97-07.
  13. Braunstein, S. (1999). Quantum Computing. Wiley Publishers. ISBN   978-3-527-40284-7.
  14. Fortnow, L. (July 2003). "Introduction of Quantum Computing from the computer science perspective and reviewing activities". NEC Research and Development. 44 (3): 268–272.
  15. Suthikshn, Kumar (1996). Reconfigurable Neurocomputers: Rapid Prototyping and Design Synthesis of Artificial Neural Networks for Field Programmable Gate Arrays (PhD thesis). University of Melbourne, Australia.
  16. Compton and Hauck, 2002
  17. Aarts and Korst, 1997
  18. Proc IEEE, 2001
  19. Masters, T. (1995). Neural, Novel and Hybrid Algorithms for Time Series Prediction. John Wiley and Sons Publishers.
  20. Goldberg, D. E. (2000). Genetic Algorithms in Search, Optimization and Machine Learning. Addison Wesley Publishers. ISBN 978-0-201-15767-3.
  21. Ross, 1997
  22. Kosko, B. (1997). Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence. Prentice Hall Publishers. ISBN 978-0-13-611435-2.
  23. Palem, 2003
  24. Gleick, 1998
  25. Mandelbrot, 1977
  26. Motwani and Raghavan, 1995
  27. Mitchell, 1997
  28. Joachims, 2002
  29. Suthikshn Kumar (June 2003). "Smart Volume Tuner for Cellular Phones". IEEE Wireless Communications Magazine. 11 (4): 44–49. doi:10.1109/MWC.2004.1308949. S2CID 5711655.
  30. Garey and Johnson, 1979
  31. Aarts and Korst, 1997
  32. Koza et al., 2003
