Company type | Public |
---|---|
Industry | Artificial intelligence hardware and software provider; semiconductor design & manufacturing |
Founded | 2004 |
Founder | Peter van der Made |
Headquarters | Australia |
Area served | Worldwide |
Key people | Sean Hehir (CEO); Peter van der Made (Founder and CTO, Executive Director); Anil Mankar (Co-founder, Chief Development Officer); Ken Scarince (Chief Financial Officer); Nandan Nayampally (Chief Marketing Officer); Rob Telson (Vice President Ecosystems & Partnerships); Steve Thorne (Vice President of Worldwide Sales); Professor Adam Osseiran (Chairman of the SAB); Professor Barry Marshall, NL (Member of the SAB); Professor Alan Harvey (Member of the SAB) |
Website | https://brainchip.com/ |
BrainChip (ASX:BRN, OTCQX:BRCHF) is an Australia-based technology company, founded in 2004 by Peter van der Made, [1] that specializes in developing advanced artificial intelligence (AI) and machine learning (ML) hardware. [2] The company's primary products are the MetaTF development environment, which allows the training and deployment of spiking neural networks (SNNs), and the AKD1000 neuromorphic processor, a hardware implementation of its spiking neural network system. BrainChip's technology is based on a neuromorphic computing architecture, which attempts to mimic the way the human brain works. The company is a member of Intel Foundry Services and of the Arm AI partnership. [3] [4]
Australian mining company Aziana acquired BrainChip in March 2015. [5] In September 2015, via a reverse merger with the then-dormant Aziana, [6] BrainChip was listed on the Australian Securities Exchange (ASX), and van der Made began commercializing his original idea for artificial intelligence processor hardware. In 2016, the company appointed former Exar CEO Louis Di Nardo as CEO; van der Made then took the position of CTO. [7] In October 2021, the company announced that it was taking orders for its Akida AI Processor Development Kits, [8] [9] and in January 2022, that it was taking orders for its Akida AI Processor PCIe boards. [10] In April 2022, BrainChip partnered with NVISO to collaborate on applications and technologies. [11] In November 2022, BrainChip added the Rochester Institute of Technology to its University AI accelerator program. [12] The following month, BrainChip joined Intel Foundry Services. [4] In January 2023, Edge Impulse announced support for BrainChip's AKD processor. [13]
The MetaTF software is designed to work with a variety of image, video, and sensor data, and is intended for a range of applications, including security, surveillance, autonomous vehicles, and industrial automation. The software uses Python to create spiking neural networks (or to convert other neural networks to SNNs) for use on the AKD processor hardware. The software is also capable of deploying SNNs on conventional processors. [14]
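The sketch below illustrates this kind of workflow: a small Keras model is defined, quantized, and converted to a spiking model. It is a minimal sketch only; the `cnn2snn` package and its `quantize` and `convert` helpers are assumed from BrainChip's MetaTF tooling, and the exact names, signatures, and defaults may differ from the shipped API.

```python
# Minimal sketch of a MetaTF-style workflow: define a Keras CNN, quantize it,
# and convert it to a spiking model for the Akida runtime.
# The `cnn2snn` package and its `quantize`/`convert` functions are assumed here;
# exact names and signatures may differ from BrainChip's actual API.
import tensorflow as tf
from cnn2snn import quantize, convert  # assumed MetaTF API

# A small convolutional network trained with ordinary Keras tooling.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Quantize weights/activations, then convert to an event-based (spiking) model.
quantized = quantize(model, weight_quantization=4, activ_quantization=4)
akida_model = convert(quantized)  # deployable on AKD hardware or in software
akida_model.summary()
```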
The Akida 1000 processor [15] is an event-based neural processing device with 1.2 million artificial neurons and 10 billion artificial synapses. Using event-based processing, it analyzes only essential inputs at specific points, and results are stored in the on-chip memory units. [16]
The processor contains 80 nodes that communicate over a mesh network. Each node consists of four Neural Processing Units (NPUs), each either convolutional or fully connected and coupled with its own memory unit. Akida runs an entire neural network, executing all neuron layers in parallel. These design elements are meant to allow inference and incremental learning on edge devices with lower power consumption. [17] [18]
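The power savings claimed for event-based processing come from skipping work on inactive inputs. The toy sketch below is not BrainChip code; it simply illustrates the idea that only non-zero input "events" trigger synaptic computation, with all names and numbers chosen for illustration.

```python
# Conceptual illustration of event-based processing (not BrainChip's design):
# only non-zero input "events" trigger synaptic work, so sparse inputs cost less.
import numpy as np

def event_based_layer(events, weights, threshold=1.0):
    """events: sparse binary input vector; weights: (inputs, neurons) matrix."""
    potentials = np.zeros(weights.shape[1])
    active = np.flatnonzero(events)          # indices that carry an event
    for i in active:                         # work scales with events, not inputs
        potentials += weights[i]
    return (potentials >= threshold).astype(int)  # output spikes

rng = np.random.default_rng(0)
weights = rng.uniform(0, 0.5, size=(1024, 64))
events = (rng.random(1024) < 0.02).astype(int)    # ~2% of inputs are active
print(event_based_layer(events, weights).sum(), "neurons fired")
```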
On January 29, 2023, BrainChip announced that it had completed the design of its AKD1500 reference chip. [19] On March 6, 2023, BrainChip announced the second generation of its Akida platform, adding support for 8-bit weights and activations, a Vision Transformer (ViT) engine, and hardware support for Temporal Event-Based Neural Nets (TENNs). [20] [21] On March 12, 2023, BrainChip announced that the Akida processor family integrates with the Arm Cortex-M85 processor. [22]
Neuromorphic computing is an approach to computing that is inspired by the structure and function of the human brain. A neuromorphic computer/chip is any device that uses physical artificial neurons to do computations. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems. The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors, among others. Training software-based neuromorphic systems of spiking neural networks can be achieved using error backpropagation, e.g., using Python-based frameworks such as snnTorch, or using canonical learning rules from the biological learning literature, e.g., using BindsNET.
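As a concrete example of the software route mentioned above, the snippet below steps a leaky integrate-and-fire neuron layer with the snnTorch framework. It is a minimal sketch based on snnTorch's documented `Leaky` neuron; training details (surrogate gradients, loss functions, optimizers) are omitted, and the layer sizes are arbitrary.

```python
# Minimal snnTorch sketch: a leaky integrate-and-fire (LIF) layer driven by
# random input for a few time steps. Error backpropagation training would
# wrap this loop in a loss computation and an optimizer step.
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)        # membrane decay rate
fc = torch.nn.Linear(784, 10)    # synaptic weights feeding the LIF layer

mem = lif.init_leaky()           # initial membrane potential
x = torch.rand(25, 784)          # 25 time steps of input

spikes = []
for t in range(x.size(0)):
    cur = fc(x[t])               # input current at time step t
    spk, mem = lif(cur, mem)     # spike if membrane potential crosses threshold
    spikes.append(spk)

print(torch.stack(spikes).sum().item(), "total spikes emitted")
```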
The transistor count is the number of transistors in an electronic device. It is the most common measure of integrated circuit complexity. The rate at which MOS transistor counts have increased generally follows Moore's law, which observes that transistor count doubles approximately every two years. However, being directly proportional to the area of a die, transistor count does not represent how advanced the corresponding manufacturing technology is. A better indication of this is transistor density which is the ratio of a semiconductor's transistor count to its die area.
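A short worked example of that ratio, using round, hypothetical numbers rather than figures for any real chip:

```python
# Transistor density = transistor count / die area (hypothetical round numbers).
transistor_count = 20_000_000_000      # 20 billion transistors
die_area_mm2 = 200                     # 200 mm^2 die
density = transistor_count / die_area_mm2
print(f"{density / 1e6:.0f} million transistors per mm^2")  # -> 100 MTr/mm^2
```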
Spiking neural networks (SNNs) are artificial neural networks (ANN) that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle, but rather transmit information only when a membrane potential—an intrinsic quality of the neuron related to its membrane electrical charge—reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.
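The mechanism described above can be simulated directly. The short sketch below is an illustrative leaky integrate-and-fire model with arbitrary parameters: the membrane potential leaks over time, integrates incoming spikes, and the neuron fires and resets when the threshold is crossed.

```python
# Direct simulation of the spiking mechanism described above: a leaky
# integrate-and-fire neuron accumulates membrane potential from incoming
# spikes, fires when the potential crosses a threshold, then resets.
def simulate_lif(input_spikes, weight=0.4, leak=0.9, threshold=1.0):
    potential, output = 0.0, []
    for spike_in in input_spikes:
        potential = potential * leak + weight * spike_in   # leak + integrate
        if potential >= threshold:                         # threshold crossing
            output.append(1)                               # emit a spike
            potential = 0.0                                # reset after firing
        else:
            output.append(0)
    return output

print(simulate_lif([1, 1, 0, 1, 1, 1, 0, 0, 1, 1]))  # -> [0,0,0,1,0,0,0,0,0,1]
```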
Cadence Design Systems, Inc., is an American multinational technology and computational software company. Headquartered in San Jose, California, Cadence was formed in 1988 through the merger of SDA Systems and ECAD. Initially specialized in electronic design automation (EDA) software for the semiconductor industry, the company now makes software and hardware for designing products such as integrated circuits, systems on chips (SoCs), printed circuit boards, and pharmaceutical drugs, and also licenses intellectual property to the electronics, aerospace, defense, and automotive industries, among others.
Dolphin Design is a semiconductor design company, founded in 2018, formerly known as Dolphin Integration, based in Meylan in the Grenoble region (France).
Massimiliano Versace is the co-founder and CEO of Neurala Inc, a Boston-based company building artificial intelligence that emulates brain function in software, used to automate the process of visual inspection in manufacturing. He is also the founding Director of the Boston University Neuromorphics Lab. Massimiliano Versace is a Fulbright scholar and holds two PhDs: one in Experimental Psychology from the University of Trieste, Italy, and one in Cognitive and Neural Systems from Boston University, USA. He obtained his BSc from the University of Trieste, Italy.
Kwabena Adu Boahen is a Ghanaian-born Professor of Bioengineering and Electrical Engineering at Stanford University. He previously taught at the University of Pennsylvania.
SpiNNaker is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group (APT) at the Department of Computer Science, University of Manchester. It is composed of 57,600 processing nodes, each with 18 ARM9 processors and 128 MB of mobile DDR SDRAM, totalling 1,036,800 cores and over 7 TB of RAM. The computing platform is based on spiking neural networks, useful in simulating the human brain.
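The headline totals follow directly from the per-node figures quoted above; a quick illustrative check:

```python
# Checking SpiNNaker's headline totals from the per-node figures quoted above.
nodes = 57_600
cores = nodes * 18                  # 18 ARM9 cores per node
ram_tb = nodes * 128 / (1024 ** 2)  # 128 MB per node, expressed in TB
print(cores)                        # 1,036,800 cores
print(round(ram_tb, 2))             # ~7.03 TB of RAM
```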
A cognitive computer is a computer that hardwires artificial intelligence and machine learning algorithms into an integrated circuit that closely reproduces the behavior of the human brain. It generally adopts a neuromorphic engineering approach. Synonyms include neuromorphic chip and cognitive chip.
The IEEE International Electron Devices Meeting (IEDM) is an annual micro- and nanoelectronics conference held each December that serves as a forum for reporting technological breakthroughs in the areas of semiconductor and related device technologies, design, manufacturing, physics, modeling and circuit-device interaction.
Zeroth is a platform for brain-inspired computing from Qualcomm. It is based around a neural processing unit (NPU) AI accelerator chip and a software API to interact with the platform. It makes a form of machine learning known as deep learning available to mobile devices. It is used for image and sound processing, including speech recognition. The software operates locally rather than as a cloud application.
Movidius is a company based in San Mateo, California, that designs low-power processor chips for computer vision. The company was acquired by Intel in September 2016.
A vision processing unit (VPU) is an emerging class of microprocessor; it is a specific type of AI accelerator, designed to accelerate machine vision tasks.
Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google for neural network machine learning, using Google's own TensorFlow software. Google began using TPUs internally in 2015, and in 2018 made them available for third-party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale.
An AI accelerator, deep learning processor, or neural processing unit (NPU) is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFETs.
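As an illustration of the low-precision arithmetic these accelerators favor, the sketch below quantizes floating-point weights to 8-bit integers and reconstructs approximate values; it is a generic example, not tied to any particular accelerator or quantization scheme.

```python
# Generic illustration of low-precision (8-bit) arithmetic: quantize float32
# weights to int8 with a scale factor, then recover approximate values.
import numpy as np

weights = np.random.default_rng(1).normal(0, 0.2, size=8).astype(np.float32)
scale = np.abs(weights).max() / 127          # map the largest weight to +/-127
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale   # approximate reconstruction

print("original   :", np.round(weights, 4))
print("int8 codes :", q)
print("max error  :", np.abs(weights - dequantized).max())
```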
Electrochemical Random-Access Memory (ECRAM) is a type of non-volatile memory (NVM) with multiple levels per cell (MLC) designed for deep learning analog acceleration. An ECRAM cell is a three-terminal device composed of a conductive channel, an insulating electrolyte, an ionic reservoir, and metal contacts. The resistance of the channel is modulated by ionic exchange at the interface between the channel and the electrolyte upon application of an electric field. The charge-transfer process allows both for state retention in the absence of applied power, and for programming of multiple distinct levels, both differentiating ECRAM operation from that of a field-effect transistor (FET). The write operation is deterministic and can result in symmetrical potentiation and depression, making ECRAM arrays attractive for acting as artificial synaptic weights in physical implementations of artificial neural networks (ANN). The technological challenges include open circuit potential (OCP) and semiconductor foundry compatibility associated with energy materials. Universities, government laboratories, and corporate research teams have contributed to the development of ECRAM for analog computing. Notably, Sandia National Laboratories designed a lithium-based cell inspired by solid-state battery materials, Stanford University built an organic proton-based cell, and International Business Machines (IBM) demonstrated in-memory selector-free parallel programming for a logistic regression task in an array of metal-oxide ECRAM designed for insertion in the back end of line (BEOL). In 2022, researchers at Massachusetts Institute of Technology built an inorganic, CMOS-compatible protonic technology that achieved near-ideal modulation characteristics using nanosecond pulses.
Weebit Nano is a public semiconductor IP company founded in Israel in 2015 and headquartered in Hod Hasharon, Israel. The company develops Resistive Random-Access Memory technologies. Resistive Random-Access Memory is a specialized form of non-volatile memory (NVM) for the semiconductor industry. The company’s products are targeted at a broad range of NVM markets where persistence, performance, and endurance are all required. ReRAM technology can be integrated in electronic devices like wearables, Internet of Things (IoT) endpoints, smartphones, robotics, autonomous vehicles, and 5G cellular communications, among other products. Weebit Nano’s IP can be licensed to semiconductor companies and semiconductor fabs.
Specialized computer hardware is often used to execute artificial intelligence (AI) programs faster, and with less energy, such as Lisp machines, neuromorphic engineering, event cameras, and physical neural networks. As of 2023, the market for AI hardware is dominated by GPUs.
Nikola Kirilov Kasabov also known as Nikola Kirilov Kassabov is a Bulgarian and New Zealand computer scientist, academic and author. He is a professor emeritus of Knowledge Engineering at Auckland University of Technology, Founding Director of the Knowledge Engineering and Discovery Research Institute (KEDRI), George Moore Chair of Data Analytics at Ulster University, as well as visiting professor at both the Institute for Information and Communication Technologies (IICT) at the Bulgarian Academy of Sciences and Dalian University in China. He is also the Founder and Director of Knowledge Engineering Consulting.