Massimiliano Versace
Born: 21 December 1972, Monfalcone, Italy
Nationality: Italian
Alma mater: University of Trieste; Boston University
Known for: Deep learning, neural networks, DARPA SyNAPSE
Awards: Fulbright Scholar
Scientific career
Fields: Artificial intelligence, deep learning
Institutions: Boston University; Neurala
Thesis: Spikes, synchrony, and attentive learning by laminar thalamocortical circuits (2007)
Doctoral advisor: Stephen Grossberg
Website: maxversace
Massimiliano Versace (born December 21, 1972, in Monfalcone, Italy) is the co-founder and CEO of Neurala Inc., [1] [2] [3] [4] [5] a Boston-based company building artificial intelligence that emulates brain function in software and is used to automate visual inspection in manufacturing. [6] He is also the founding director of the Boston University Neuromorphics Lab. [7] Versace is a Fulbright scholar and holds two PhDs: one in Experimental Psychology from the University of Trieste, Italy, and one in Cognitive and Neural Systems from Boston University, USA. He obtained his BSc from the University of Trieste.
Versace grew up in Monfalcone, Italy, and came to the United States in 2001 as a Fulbright scholar. He holds a master's degree in psychology from the University of Trieste and two PhDs (Experimental Psychology, University of Trieste, Italy; Cognitive and Neural Systems, Boston University, USA). As a professor of artificial intelligence at Boston University, he founded the Neuromorphics Lab, [8] [7] [9] [10] and from 2009 to 2011 the lab led a main research thrust in the DARPA SyNAPSE program, collaborating with Hewlett-Packard to design artificial nervous systems based on deep learning and implemented on novel memristor-based devices. In December 2010, Versace published a cover-featured article in IEEE Spectrum [11] describing a roadmap for developing a large-scale brain model using memristor-based technologies.
The model designed by Versace and his colleagues, termed the Modular Neural Exploring Traveling Agent (MoNETA), [11] was the first large-scale neural network model to implement whole-brain circuits powering a virtual and robotic agent in a manner compatible with memristor-based hardware computation. A cover article in IEEE Computer [12] featured the software platform and modeling implemented by the joint HP and Boston University teams, and the March 2012 edition of IEEE Pulse [13] featured his lab's work on brain modeling. From 2011 to 2016, Versace and his team at Neurala [14] worked with NASA and successfully built deep learning models able to power navigation and perception for exploring novel environments in real time. [15] [16] [17] [18] [19] [20] [21]
His work has also been featured in TIME Magazine, [22] The New York Times, [23] Nasdaq, [24] The Boston Globe, [25] Xconomy, [26] IEEE Spectrum, [27] Fortune, [28] CNBC, [29] the Chicago Tribune, [26] TechCrunch, [30] VentureBeat, [31] the Associated Press, [32] and Geek Magazine, [33] and he is a TEDx [21] speaker.
In 2006, with two colleagues from Boston University, he co-founded Neurala Inc. [14] to bring this technology to market [34] in applications ranging from robots and drones to other smart devices. [35] [36] [37] [38]
Versace received the Fulbright Fellowship in 2001. Among his career and company awards, he received the CELEST Award for Computational Modeling of Brain and Behavior in 2009, and his work was recognized as a top-cited article for 2008–2010 in Brain Research.
Versace pioneered research in continual learning [4] [43] [44] [21] neural networks, in particular cortical models of learning and memory and their application to building intelligent machines equipped with low-power, high-density neural chips that implement large-scale brain circuits of increasing complexity. His Synchronous Matching Adaptive Resonance Theory (SMART) model [45] [38] shows how spiking laminar cortical circuits self-organize and stably learn relevant information, and how these circuits can be embedded in low-power, memristor-based hybrid CMOS chips and used to solve challenging pattern recognition problems. His work has been featured in Fortune, [46] Inc., [47] TechCrunch, [48] IEEE Spectrum, [49] and VentureBeat, [50] among others.
Robotic control is the system that governs the movement of robots. It involves the mechanical components and programmable systems that make it possible to control robots. Robots can be controlled by various means, including manual, wireless, semi-autonomous, and fully autonomous control.
An artificial neuron is a mathematical function conceived as a model of a biological neuron in a neural network. Artificial neurons are the elementary units of an artificial neural network. An artificial neuron receives one or more inputs and sums them to produce an output. Usually each input is separately weighted, and the sum is passed through a non-linear function known as an activation function or transfer function. Transfer functions usually have a sigmoid shape, but they may also take the form of other non-linear functions, piecewise linear functions, or step functions. They are also often monotonically increasing, continuous, differentiable, and bounded. Non-monotonic, unbounded, and oscillating activation functions with multiple zeros that outperform sigmoidal and ReLU-like activation functions on many tasks have also been explored recently. The thresholding function has inspired logic gates referred to as threshold logic, applicable to building logic circuits that resemble brain processing. For example, new devices such as memristors have recently been used extensively to develop such logic.
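The weighted-sum-plus-activation computation described above can be sketched in a few lines of plain Python (the function names and sample values here are illustrative, not taken from any particular library):

```python
import math

def sigmoid(x):
    # Classic S-shaped activation: monotonic, bounded in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def artificial_neuron(inputs, weights, bias=0.0, activation=sigmoid):
    # Each input is separately weighted, then the weighted inputs are summed
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The sum is passed through a non-linear activation (transfer) function
    return activation(s)

# With a net input of zero, the sigmoid returns 0.5, the midpoint of its range
y = artificial_neuron([1.0, -1.0], [0.5, 0.5])
```

Swapping `activation` for a step function turns the same unit into a threshold-logic gate of the kind mentioned above.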
Neuromorphic computing is an approach to computing that is inspired by the structure and function of the human brain. A neuromorphic computer/chip is any device that uses physical artificial neurons to do computations. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems. The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, transistors, among others. Training software-based neuromorphic systems of spiking neural networks can be achieved using error backpropagation, e.g., using Python based frameworks such as snnTorch, or using canonical learning rules from the biological learning literature, e.g., using BindsNet.
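As an illustration of the spiking neurons such systems simulate, here is a minimal leaky integrate-and-fire sketch in plain Python (the parameters and names are illustrative; real frameworks such as snnTorch or BindsNet provide far richer neuron models and learning rules):

```python
def lif_step(v, i_in, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire membrane."""
    # Membrane leaks toward its resting potential and integrates the input
    v = v + (dt / tau) * (v_rest - v) + i_in
    if v >= v_thresh:
        return v_rest, True   # threshold crossed: emit a spike and reset
    return v, False

# Drive the neuron with a constant input current and record its spike train
v, spikes = 0.0, []
for _ in range(50):
    v, fired = lif_step(v, 0.15)
    spikes.append(fired)
```

Information in such a system is carried by the timing and rate of the spike train rather than by a continuous activation value, which is what makes these models a natural fit for event-driven neuromorphic hardware.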
A cognitive architecture refers to both a theory about the structure of the human mind and to a computational instantiation of such a theory used in the fields of artificial intelligence (AI) and computational cognitive science. The formalized models can be used to further refine a comprehensive theory of cognition and as a useful artificial intelligence program. Successful cognitive architectures include ACT-R and SOAR. The research on cognitive architectures as software instantiation of cognitive theories was initiated by Allen Newell in 1990.
A neural network can refer to either a neural circuit of biological neurons or a network of artificial neurons or nodes in the case of an artificial neural network. Artificial neural networks are used for solving artificial intelligence (AI) problems; they model connections between biological neurons as weights between nodes. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed; this activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or between −1 and 1.
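The linear combination followed by an activation function can be sketched for a whole layer of nodes (the names and weights are illustrative); with tanh as the activation, outputs fall in the −1 to 1 range mentioned above:

```python
import math

def dense_layer(inputs, weight_rows, activation=math.tanh):
    """One layer of nodes; each row holds one node's connection weights.
    Positive weights are excitatory, negative weights inhibitory."""
    return [activation(sum(x * w for x, w in zip(inputs, row)))  # linear combination
            for row in weight_rows]                              # then activation

# A two-node layer: the first node is excited by both inputs, the second inhibited
out = dense_layer([1.0, 0.5], [[0.8, 0.4], [-0.6, -0.2]])
```

Stacking calls to `dense_layer`, feeding each layer's output into the next, yields a simple feedforward network.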
An artificial brain is software and hardware with cognitive abilities similar to those of the animal or human brain.
Stephen Grossberg is a cognitive scientist, theoretical and computational psychologist, neuroscientist, mathematician, biomedical engineer, and neuromorphic technologist. He is the Wang Professor of Cognitive and Neural Systems and a Professor Emeritus of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering at Boston University.
Neurorobotics is the combined study of neuroscience, robotics, and artificial intelligence. It is the science and technology of embodied autonomous neural systems. Neural systems include brain-inspired algorithms, computational models of biological neural networks and actual biological systems. Such neural systems can be embodied in machines with mechanic or any other forms of physical actuation. This includes robots, prosthetic or wearable systems but also, at smaller scale, micro-machines and, at the larger scales, furniture and infrastructures.
A memristor is a non-linear two-terminal electrical component relating electric charge and magnetic flux linkage. It was described and named in 1971 by Leon Chua, completing a theoretical quartet of fundamental electrical components which also comprises the resistor, capacitor and inductor.
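As a sketch of Chua's standard formulation, the memristance M links an increment of flux linkage to an increment of charge:

```latex
% Memristance (Chua, 1971): flux linkage \varphi as a function of charge q
d\varphi = M(q)\,dq
\quad\Longrightarrow\quad
M(q) = \frac{d\varphi}{dq}
% Since d\varphi/dt = v and dq/dt = i, the device acts as a
% charge-dependent resistance:
v(t) = M\bigl(q(t)\bigr)\,i(t)
```

Because M depends on the charge that has already flowed through the device, a memristor effectively remembers its past current, which is what makes it attractive for emulating synapses in hardware.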
Artificial intelligence (AI) has been used in applications to alleviate certain problems throughout industry and academia. AI, like electricity or computers, is a general-purpose technology that has a multitude of applications. It has been used in fields of language translation, image recognition, credit scoring, e-commerce and other domains.
Daniela L. Rus is a roboticist and computer scientist, Director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science (EECS) at the Massachusetts Institute of Technology.
Yann André LeCun is a Turing Award winning French computer scientist working primarily in the fields of machine learning, computer vision, mobile robotics and computational neuroscience. He is the Silver Professor of the Courant Institute of Mathematical Sciences at New York University and Vice-President, Chief AI Scientist at Meta.
A physical neural network is a type of artificial neural network in which an electrically adjustable material is used to emulate the function of a neural synapse or a higher-order (dendritic) neuron model. "Physical" neural network is used to emphasize the reliance on physical hardware used to emulate neurons as opposed to software-based approaches. More generally the term is applicable to other artificial neural networks in which a memristor or other electrically adjustable resistance material is used to emulate a neural synapse.
SyNAPSE is a DARPA program that aims to develop electronic neuromorphic machine technology, an attempt to build a new kind of cognitive computer with form, function, and architecture similar to the mammalian brain. Such artificial brains would be used in robots whose intelligence would scale with the size of the neural system in terms of the total number of neurons and synapses and their connectivity.
A cognitive computer is a computer that hardwires artificial intelligence and machine-learning algorithms into an integrated circuit that closely reproduces the behavior of the human brain. It generally adopts a neuromorphic engineering approach. Synonyms are neuromorphic chip and cognitive chip.
Google Brain was a deep learning artificial intelligence research team under the umbrella of Google AI, a research division at Google dedicated to artificial intelligence. Formed in 2011, Google Brain combined open-ended machine learning research with information systems and large-scale computing resources. The team created tools such as TensorFlow, which allows neural networks to be used by the public, and pursued multiple internal AI research projects. It aimed to create research opportunities in machine learning and natural language processing, and was merged into former Google sister company DeepMind to form Google DeepMind in April 2023.
An AI accelerator is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2018, a typical AI integrated circuit chip contains billions of MOSFET transistors. A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.
Louis Barry Rosenberg is an American engineer, researcher, inventor, and entrepreneur. He researches augmented reality, virtual reality, and artificial intelligence. He was the Cotchett Endowed Professor of Educational Technology at the California Polytechnic State University, San Luis Obispo. He founded the Immersion Corporation and Unanimous A.I., and he wrote the screenplay for the 2009 romantic comedy film, Lab Rats.
Hai (Helen) Li is a Chinese-American electrical and computer engineer known for her research on neuromorphic engineering, the development of computation systems based on physical artificial neurons, and on deep learning, techniques for using deep neural networks in machine learning. She is Clare Boothe Luce Professor of Electrical and Computer Engineering at Duke University.
BrainChip is an Australia-based technology company, founded in 2004 by Peter Van Der Made, that specializes in developing advanced artificial intelligence (AI) and machine learning (ML) hardware. The company's primary products are the MetaTF development environment, which allows the training and deployment of spiking neural networks (SNN), and the AKD1000 neuromorphic processor, a hardware implementation of their spiking neural network system. BrainChip's technology is based on a neuromorphic computing architecture, which attempts to mimic the way the human brain works. The company is part of Intel Foundry Services and the Arm AI partnership.