Artificial Intelligence System

Operating system: Windows, Linux, macOS
Platform: BOINC
Website: www.intelligencerealm.com/aisystem/ (archived)

Artificial Intelligence System (AIS) was a volunteer computing project undertaken by Intelligence Realm, Inc. with the long-term goal of simulating the human brain in real time, complete with artificial consciousness and artificial general intelligence. The company claimed to have found, in its research, the "mechanisms of knowledge representation in the brain which is equivalent to finding artificial intelligence", [1] before moving into the development phase.

History

The project's initial goal was to recreate the largest brain simulation performed to date, by neuroscientist Eugene M. Izhikevich of The Neurosciences Institute in San Diego, California. Izhikevich had simulated 1 second of activity of 100 billion neurons (the estimated number of neurons in the human brain) in 50 days, using a cluster of 27 processors running at 3 gigahertz. [2] He extrapolated that a real-time simulation of the brain could not be achieved before 2016. [3] The project aimed to disprove this prediction.
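Those figures alone imply a very large gap from real time. The following back-of-the-envelope check is a sketch added here for illustration, using only the numbers quoted above rather than the project's own estimates:

    # Rough check using only the figures quoted above: 1 simulated second
    # took 50 days of wall-clock time, i.e. the simulation ran roughly
    # 4.3 million times slower than real time.
    wall_clock_seconds = 50 * 24 * 60 * 60   # 50 days = 4,320,000 s
    simulated_seconds = 1                    # 1 s of brain activity
    slowdown = wall_clock_seconds / simulated_seconds
    print(f"about {slowdown:,.0f}x slower than real time")  # ~4,320,000x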

Artificial Intelligence System announced on September 5, 2007, that it would use the Berkeley Open Infrastructure for Network Computing (BOINC) software to perform the project's intensive calculations.

On July 12, 2008, the project completed its first phase by reaching the 100 billion neuron mark. [4] The project then continued simulating neurons while development of other related applications was completed.

Application description

  1. the application is a neural network simulator that simulates biophysical cells defined as mathematical models, using the Hodgkin–Huxley model to describe the properties of neurons (a minimal illustrative sketch of such a simulator follows this list)
  2. the list of models will continue to grow and will eventually include many models
  3. the simulator receives its input from XML files that contain cell properties describing behavior
  4. the simulator computes the system's behavior over time
  5. computation results are saved to files [5]
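Below is a minimal sketch of the kind of per-cell computation this list describes: a single Hodgkin–Huxley neuron whose properties are read from an XML description and whose simulated behavior over time is written to a file. The XML layout, parameter names and values, file names, and the forward-Euler integration are illustrative assumptions made here; the project's actual file formats and numerical methods were not published in this level of detail.

    # Minimal single-neuron sketch (illustrative only; see the note above).
    import csv
    import math
    import xml.etree.ElementTree as ET

    # A hypothetical XML cell description; the project's real schema is unknown.
    CELL_XML = """
    <cell name="example_neuron">
      <param name="C_m"  value="1.0"/>      <!-- membrane capacitance, uF/cm^2 -->
      <param name="g_Na" value="120.0"/>    <!-- sodium conductance, mS/cm^2 -->
      <param name="g_K"  value="36.0"/>     <!-- potassium conductance, mS/cm^2 -->
      <param name="g_L"  value="0.3"/>      <!-- leak conductance, mS/cm^2 -->
      <param name="E_Na" value="50.0"/>     <!-- reversal potentials, mV -->
      <param name="E_K"  value="-77.0"/>
      <param name="E_L"  value="-54.387"/>
    </cell>
    """

    def load_cell(xml_text):
        """Parse cell properties from an XML description into a dict (step 3)."""
        root = ET.fromstring(xml_text.strip())
        return {p.get("name"): float(p.get("value")) for p in root.findall("param")}

    # Standard Hodgkin-Huxley gating-rate functions (membrane potential V in mV).
    def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
    def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
    def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

    def simulate(cell, t_max=50.0, dt=0.01, I_inj=10.0):
        """Integrate the Hodgkin-Huxley equations with forward Euler (step 4)."""
        V, m, h, n = -65.0, 0.05, 0.6, 0.32   # typical resting-state values
        samples = []
        for i in range(int(t_max / dt)):
            I_Na = cell["g_Na"] * m**3 * h * (V - cell["E_Na"])
            I_K  = cell["g_K"]  * n**4     * (V - cell["E_K"])
            I_L  = cell["g_L"]             * (V - cell["E_L"])
            V += dt * (I_inj - I_Na - I_K - I_L) / cell["C_m"]
            m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
            h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
            n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
            samples.append((i * dt, V))
        return samples

    if __name__ == "__main__":
        cell = load_cell(CELL_XML)
        results = simulate(cell)
        # Save the computed behavior over time to a file (step 5).
        with open("membrane_potential.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["time_ms", "V_mV"])
            writer.writerows(results)

The real application presumably integrated many interconnected cells per BOINC work unit rather than a single neuron; the sketch only shows the per-cell Hodgkin–Huxley update and the XML-in, file-out flow outlined in steps 3 to 5.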

Conclusion

Artificial Intelligence System had successfully simulated over 700 billion neurons by April 2009, and the project reported 7,119 participants in January 2010. [6]

AIS was last seen working on the post-data stage before its website became unavailable after November 2010.

Related Research Articles

The Chinese room argument holds that a digital computer executing a program cannot have a "mind", "understanding", or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. Philosopher John Searle presented the argument in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. Gottfried Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978) presented similar arguments. Searle's version has been widely discussed in the years since. The centerpiece of Searle's argument is a thought experiment known as the Chinese room.

Mind uploading is a speculative process of whole brain emulation in which a brain scan is used to completely emulate the mental state of the individual in a digital computer. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind.

Artificial consciousness (AC), also known as machine consciousness (MC), synthetic consciousness or digital consciousness, is the consciousness hypothesized to be possible in artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience. The same terminology can be used with the term "sentience" instead of "consciousness" when specifically designating phenomenal consciousness.

Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.

Neuromorphic computing is an approach to computing that is inspired by the structure and function of the human brain. A neuromorphic computer/chip is any device that uses physical artificial neurons to do computations. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems. The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, transistors, among others. Training software-based neuromorphic systems of spiking neural networks can be achieved using error backpropagation, e.g., using Python based frameworks such as snnTorch, or using canonical learning rules from the biological learning literature, e.g., using BindsNet.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that can perform as well or better than humans on a wide range of cognitive tasks, as opposed to narrow AI, which is designed for specific tasks. It is one of various definitions of strong AI.

Modelling biological systems is a significant task of systems biology and mathematical biology. Computational systems biology aims to develop and use efficient algorithms, data structures, visualization and communication tools with the goal of computer modelling of biological systems. It involves the use of computer simulations of biological systems, including cellular subsystems, to both analyze and visualize the complex connections of these cellular processes.

An artificial brain is software and hardware with cognitive abilities similar to those of the animal or human brain.

A wetware computer is an organic computer composed of organic material ("wetware"), such as living neurons. Wetware computers composed of neurons differ from conventional computers because they use biological materials and offer the possibility of substantially more energy-efficient computing. While a wetware computer is still largely conceptual, there has been limited success with construction and prototyping, which has served as a proof of concept for its future application to computing. The most notable prototypes have stemmed from research by biological engineer William Ditto during his time at the Georgia Institute of Technology. His construction in 1999 of a simple neurocomputer capable of basic addition from leech neurons was a significant step for the concept. This research was a primary example driving interest in creating artificially constructed, but still organic, brains.

The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, because the technology is concerned with the creation of artificial animals or artificial people, the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.

Neuroinformatics is the field that combines informatics and neuroscience. It is concerned with neuroscience data and with information processing by artificial neural networks, and it is applied in three main directions.

The Blue Brain Project is a Swiss brain research initiative that aims to create a digital reconstruction of the mouse brain. The project was founded in May 2005 by the Brain and Mind Institute of École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. Its mission is to use biologically-detailed digital reconstructions and simulations of the mammalian brain to identify the fundamental principles of brain structure and function.

The outline of artificial intelligence provides an overview of and topical guide to artificial intelligence.

GENESIS is a simulation environment for constructing realistic models of neurobiological systems at many levels of scale including: sub-cellular processes, individual neurons, networks of neurons, and neuronal systems. These simulations are “computer-based implementations of models whose primary objective is to capture what is known of the anatomical structure and physiological characteristics of the neural system of interest”. GENESIS is intended to quantify the physical framework of the nervous system in a way that allows for easy understanding of the physical structure of the nerves in question. “At present only GENESIS allows parallelized modeling of single neurons and networks on multiple-instruction-multiple-data parallel computers.” Development of GENESIS software spread from its home at Caltech to labs at the University of Texas at San Antonio, the University of Antwerp, the National Centre for Biological Sciences in Bangalore, the University of Colorado, the Pittsburgh Supercomputing Center, the San Diego Supercomputer Center, and Emory University.

Brain simulation is the concept of creating a functioning computer model of a brain or part of a brain. Brain simulation projects intend to contribute to a complete understanding of the brain, and eventually also assist the process of treating and diagnosing brain diseases.

Dharmendra S. Modha is an Indian American manager and lead researcher of the Cognitive Computing group at IBM Almaden Research Center. He is known for his pioneering work in artificial intelligence and mind simulation. In November 2009, Modha announced at a supercomputing conference that his team had written a program that simulated a cat brain. He is the recipient of multiple honors, including the Gordon Bell Prize, given each year to recognize outstanding achievement in high-performance computing applications. In November 2012, Modha announced on his blog that, using 96 Blue Gene/Q racks of the Lawrence Livermore National Laboratory Sequoia supercomputer, a combined IBM and LBNL team achieved an unprecedented scale of 2.084 billion neurosynaptic cores containing 530 billion neurons and 137 trillion synapses, running only 1542× slower than real time. In August 2014, a paper describing the TrueNorth architecture, "the first-ever production-scale 'neuromorphic' computer chip designed to work more like a mammalian brain than" a processor, was published in the journal Science. The TrueNorth project culminated in a 64-million-neuron system for running deep neural network applications.

SpiNNaker is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group (APT) at the Department of Computer Science, University of Manchester. It is composed of 57,600 processing nodes, each with 18 ARM9 processors and 128 MB of mobile DDR SDRAM, totalling 1,036,800 cores and over 7 TB of RAM. The computing platform is based on spiking neural networks, useful in simulating the human brain.

In computational neuroscience, SUPS (formerly CUPS) is a measure of the performance of a neuronal network, useful in the fields of neuroscience, cognitive science, artificial intelligence, and computer science.

Simon Stringer is a departmental lecturer, Director of the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, and Editor-in-Chief of Network: Computation in Neural Systems published by Taylor & Francis.

Lyle Norman Long is an academic and computational scientist. He is a Professor Emeritus of Computational Science, Mathematics, and Engineering at The Pennsylvania State University, and is best known for developing algorithms and software for mathematical models, including neural networks and robotics. His research has focused on computational science, computational neuroscience, cognitive robotics, parallel computing, and software engineering.

References

  1. "Artificial Intelligence System Video". OVGuide.com. Archived from the original on 2011-08-07. Retrieved 2011-06-07.
  2. Eugene Izhikevich (2005-10-27). "Computer Model of the Human Brain". Vesicle.nsi.edu. Archived from the original on 2010-09-18. Retrieved 2011-02-20.
  3. "why did I do that?". Vesicle.nsi.edu. Archived from the original on 2009-12-31. Retrieved 2011-02-20.
  4. "The distributed brain". Information-age.com. 2009-06-17. Archived from the original on 20 July 2011. Retrieved 2011-06-07.
  5. "Project News". 2007-11-13. Archived from the original on 2007-11-13. Retrieved 2022-09-03.
  6. "Neural Network System". 2010-01-27. Archived from the original on 2010-01-27. Retrieved 2022-09-03.