Developer(s) | Peltarion
---|---
Operating system | Microsoft Windows
Type | Neural network software
License | EULA
Website | Synapse homepage
Synapse is a component-based development environment for neural networks and adaptive systems. Created by Peltarion, Synapse supports data mining, statistical analysis, visualization, preprocessing, the design and training of neural networks and adaptive systems, and their deployment. It uses a plug-in based architecture, making it a general platform for signal processing. The first version of the product was released in May 2006.
Due to its plug-in-based design, Synapse can be put to very general use. Synapse is built on the Microsoft .NET Framework, and all Synapse components are themselves .NET components. Although Peltarion has yet to release an official API for the Synapse platform, user-made components have begun to appear, some of them original, demonstrating the openness of the platform.
The development cycle in Synapse is based on the canonical data mining cycle. A notable difference, however, is that in Synapse the cycle is not linear but iterative: the user can move freely between the steps. Synapse features four operating modes that make up the development cycle.
The preprocessing mode is for data mining and data preparation. In this mode the user can import, visualize, explore and transform data in a variety of ways. Data is imported through format components; the standard release includes format components for reading and writing data from CSV (text) files, SQL databases, images and XML. The imported data can be visualized through visualizer components, and filters can be applied to it. The filter components range from simple data rearrangement to more advanced FFT and outlier-removal filters.
The visualizers include a variety of plots and grids, which can be interconnected and branched out to perform complex data mining tasks.
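To illustrate the kind of transformation a filter component performs, the sketch below removes outliers from a numeric series using a simple z-score rule. It is plain C# written for this article, not Synapse's actual filter API; the class and method names are hypothetical.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-alone illustration of an outlier-removal filter,
// not Synapse's actual filter component API.
static class OutlierFilter
{
    // Keep only values whose z-score is within the given threshold.
    public static List<double> RemoveOutliers(IEnumerable<double> values, double zThreshold = 3.0)
    {
        var data = values.ToList();
        double mean = data.Average();
        double std = Math.Sqrt(data.Average(v => (v - mean) * (v - mean)));
        if (std == 0) return data;   // all values identical, nothing to remove
        return data.Where(v => Math.Abs((v - mean) / std) <= zThreshold).ToList();
    }
}
```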
In design mode, components are linked to construct a topology. Linked components establish a signal flow, forming a pipes-and-filters machine: when a signal arrives at a component, the component filters it in some way, and the filtered signal can then be piped to the next component in the linked chain that forms the topology. Components can be either static or adaptive, and besides regular filters they can be sources or sinks (such as plots or data loggers). The standard distribution of Synapse comes with a variety of components, ranging from simple neural network building blocks such as weight layers and function layers, to whole neural networks such as self-organizing maps, and to more complex static elements such as the fuzzy logic component. The control system is also chosen and configured in design mode.
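The pipes-and-filters signal flow described above can be sketched in a few lines of C#. The `IComponent` interface, the `Scale` filter and the `Topology` class below are hypothetical illustrations, not Synapse's real component types.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical pipes-and-filters sketch; Synapse's real component types differ.
interface IComponent
{
    double[] Process(double[] signal);   // filter the incoming signal and pass it on
}

class Scale : IComponent                 // a simple static filter
{
    private readonly double _factor;
    public Scale(double factor) { _factor = factor; }
    public double[] Process(double[] signal) => signal.Select(x => x * _factor).ToArray();
}

class Topology
{
    private readonly List<IComponent> _chain = new List<IComponent>();
    public Topology Link(IComponent c) { _chain.Add(c); return this; }   // build the chain

    // Pipe the signal through every linked component in order.
    public double[] Run(double[] signal) =>
        _chain.Aggregate(signal, (s, c) => c.Process(s));
}
```

A source component would feed `Run`, and a sink such as a plot or data logger would consume its output.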
The training mode is used for training (adapting) the system or, more generally, for starting the control system that regulates the information flow. It is visually similar to design mode and displays the same components; because components support context-sensitive displays, they can take on a different visual appearance in training mode. In addition to running the control system, training mode allows the execution of high-level optimizers such as genetic algorithms, particle swarm optimization and simulated annealing. Remote execution and training are also possible in this mode.
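As a rough sketch of what a high-level optimizer such as simulated annealing does when wrapped around a trainable system, the C# below perturbs a parameter vector and accepts worse solutions with a temperature-dependent probability. The `cost` delegate stands in for a full run of the control system over the training data; nothing here is Synapse's optimizer API.

```csharp
using System;

// Hypothetical simulated-annealing sketch; the cost delegate stands in for
// evaluating the system over the training data.
static class Annealer
{
    public static double[] Optimize(Func<double[], double> cost, double[] start,
                                    int iterations = 10000, double startTemp = 1.0)
    {
        var rng = new Random(0);
        var current = (double[])start.Clone();
        double currentCost = cost(current);

        for (int i = 0; i < iterations; i++)
        {
            double temp = startTemp * (1.0 - (double)i / iterations);   // linear cooling
            var candidate = (double[])current.Clone();
            int k = rng.Next(candidate.Length);
            candidate[k] += (rng.NextDouble() - 0.5) * temp;            // small random move

            double candidateCost = cost(candidate);
            // Accept better moves always, worse moves with a temperature-dependent probability.
            if (candidateCost < currentCost ||
                rng.NextDouble() < Math.Exp((currentCost - candidateCost) / Math.Max(temp, 1e-9)))
            {
                current = candidate;
                currentCost = candidateCost;
            }
        }
        return current;
    }
}
```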
The postprocessing mode is for analyzing a trained system and preparing it for end use. System performance can be tested with statistical analysis, the sensitivity of the system's input-output relations can be examined (sensitivity analysis), and reports can be generated.
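A simple way to picture sensitivity analysis is a one-at-a-time perturbation of each input while watching the output, as in the hypothetical C# sketch below (the `model` delegate stands in for the trained system; this is not Synapse's analysis component).

```csharp
using System;

// Hypothetical one-at-a-time sensitivity analysis: perturb each input slightly
// and record how much the model's output changes.
static class Sensitivity
{
    public static double[] Analyze(Func<double[], double> model, double[] baseline, double delta = 1e-3)
    {
        double baseOutput = model(baseline);
        var sensitivities = new double[baseline.Length];
        for (int i = 0; i < baseline.Length; i++)
        {
            var perturbed = (double[])baseline.Clone();
            perturbed[i] += delta;
            sensitivities[i] = (model(perturbed) - baseOutput) / delta;   // finite-difference estimate
        }
        return sensitivities;
    }
}
```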
One of the most important postprocessing components is the deployment component.
The deployment component allows a system built in Synapse to be exported as a single .NET component. The system in the development environment is stripped down to the minimum required for execution and then compiled into an assembly. This assembly can then be used in any .NET Framework or .NET Compact Framework application; the latter allows deployment to embedded devices.
Example code in C#:

```csharp
DeployedNeuralNet net = new DeployedNeuralNet(); // Create nn object
Matrix input = someSensor.GetData();             // Get data from some sensor
net.Input_Sensor = input;                        // Set inputs to the nn
net.Run();                                       // Run the nn control system
someMotor.Power = net.Output_Port0;              // Set the power of some motor to the output of the nn
```
In machine learning, a neural network is a model inspired by the structure and function of biological neural networks in animal brains.
A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional representation of a higher-dimensional data set while preserving the topological structure of the data. For example, a data set with p variables measured in n observations could be represented as clusters of observations with similar values for the variables. These clusters then could be visualized as a two-dimensional "map" such that observations in proximal clusters have more similar values than observations in distal clusters. This can make high-dimensional data easier to visualize and analyze.
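A minimal sketch of one SOM training step, assuming a one-dimensional grid of weight vectors and omitting the usual neighbourhood and learning-rate decay schedules:

```csharp
using System;

// Minimal self-organizing-map training step on a 1-D grid of weight vectors;
// an illustrative sketch only (no decay schedules, no 2-D grid).
static class Som
{
    public static void TrainStep(double[][] units, double[] sample, double learningRate, int radius)
    {
        // 1. Find the best matching unit (closest weight vector to the sample).
        int best = 0;
        double bestDist = double.MaxValue;
        for (int i = 0; i < units.Length; i++)
        {
            double d = 0;
            for (int j = 0; j < sample.Length; j++)
                d += (units[i][j] - sample[j]) * (units[i][j] - sample[j]);
            if (d < bestDist) { bestDist = d; best = i; }
        }

        // 2. Pull the best unit and its grid neighbours toward the sample.
        for (int i = Math.Max(0, best - radius); i <= Math.Min(units.Length - 1, best + radius); i++)
            for (int j = 0; j < sample.Length; j++)
                units[i][j] += learningRate * (sample[j] - units[i][j]);
    }
}
```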
Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data and thus perform tasks without explicit instructions. Recently, artificial neural networks have been able to surpass many previous approaches in performance.
Unsupervised learning is a framework in machine learning where, in contrast to supervised learning, algorithms learn patterns exclusively from unlabeled data. Other frameworks in the spectrum of supervision include weak- or semi-supervision, where a small portion of the data is tagged, and self-supervision. Some researchers consider self-supervised learning a form of unsupervised learning.
An artificial neuron is a mathematical function conceived as a model of biological neurons in a neural network. Artificial neurons are the elementary units of artificial neural networks. The artificial neuron is a function that receives one or more inputs, applies weights to these inputs, and sums them to produce an output.
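In code, such a neuron is little more than a weighted sum. The sketch below adds a bias term and, for illustration, passes the sum through a logistic sigmoid activation:

```csharp
using System;

// A single artificial neuron: weighted sum of the inputs plus a bias,
// passed through an activation function (here the logistic sigmoid).
static class Neuron
{
    public static double Fire(double[] inputs, double[] weights, double bias)
    {
        double sum = bias;
        for (int i = 0; i < inputs.Length; i++)
            sum += inputs[i] * weights[i];
        return 1.0 / (1.0 + Math.Exp(-sum));   // sigmoid activation
    }
}
```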
Dimensionality reduction, or dimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally close to its intrinsic dimension. Working in high-dimensional spaces can be undesirable for many reasons; raw data are often sparse as a consequence of the curse of dimensionality, and analyzing the data is usually computationally intractable. Dimensionality reduction is common in fields that deal with large numbers of observations and/or large numbers of variables, such as signal processing, speech recognition, neuroinformatics, and bioinformatics.
Intelligent control is a class of control techniques that use various artificial intelligence computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, reinforcement learning, evolutionary computation and genetic algorithms.
In software engineering, a pipeline consists of a chain of processing elements, arranged so that the output of each element is the input of the next. The concept is analogous to a physical pipeline. Usually some amount of buffering is provided between consecutive elements. The information that flows in these pipelines is often a stream of records, bytes, or bits, and the elements of a pipeline may be called filters. This is also called the pipes-and-filters design pattern, which is monolithic. Its advantages are simplicity and low cost, while its disadvantages are lack of elasticity, fault tolerance and scalability. Connecting elements into a pipeline is analogous to function composition.
Beamforming or spatial filtering is a signal processing technique used in sensor arrays for directional signal transmission or reception. This is achieved by combining elements in an antenna array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array.
Prognostics is an engineering discipline focused on predicting the time at which a system or a component will no longer perform its intended function. This lack of performance is most often a failure beyond which the system can no longer be used to meet desired performance. The predicted time then becomes the remaining useful life (RUL), which is an important concept in decision making for contingency mitigation. Prognostics predicts the future performance of a component by assessing the extent of deviation or degradation of a system from its expected normal operating conditions. The science of prognostics is based on the analysis of failure modes, detection of early signs of wear and aging, and fault conditions. An effective prognostics solution is implemented when there is sound knowledge of the failure mechanisms that are likely to cause the degradations leading to eventual failures in the system. It is therefore necessary to have initial information on the possible failures in a product. Such knowledge is important to identify the system parameters that are to be monitored. A potential use of prognostics is in condition-based maintenance. The discipline that links studies of failure mechanisms to system lifecycle management is often referred to as prognostics and health management (PHM), sometimes also system health management (SHM) or, in transportation applications, vehicle health management (VHM) or engine health management (EHM). Technical approaches to building models in prognostics can be categorized broadly into data-driven approaches, model-based approaches, and hybrid approaches.
Orange is an open-source data visualization, machine learning and data mining toolkit. It features a visual programming front-end for exploratory qualitative data analysis and interactive data visualization.
Neural network software is used to simulate, research, develop, and apply artificial neural networks, software concepts adapted from biological neural networks, and in some cases, a wider array of adaptive systems such as artificial intelligence and machine learning.
Computational neurogenetic modeling (CNGM) is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biology, as well as engineering.
An echo state network (ESN) is a type of reservoir computer that uses a recurrent neural network with a sparsely connected hidden layer. The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can produce or reproduce specific temporal patterns. The main interest of this network is that although its behavior is non-linear, the only weights that are modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector, and minimizing it reduces to solving a linear system.
Spiking neural networks (SNNs) are artificial neural networks (ANN) that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle, but rather transmit information only when a membrane potential—an intrinsic quality of the neuron related to its membrane electrical charge—reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.
Data preprocessing can refer to manipulation, filtration or augmentation of data before it is analyzed, and is often an important step in the data mining process. Data collection methods are often loosely controlled, resulting in out-of-range values, impossible data combinations, and missing values, amongst other issues.
Fault detection, isolation, and recovery (FDIR) is a subfield of control engineering which concerns itself with monitoring a system, identifying when a fault has occurred, and pinpointing the type of fault and its location. Two approaches can be distinguished: direct pattern recognition of sensor readings that indicate a fault, and analysis of the discrepancy between the sensor readings and expected values derived from some model. In the latter case, it is typical that a fault is said to be detected if the discrepancy or residual goes above a certain threshold. It is then the task of fault isolation to categorize the type of fault and its location in the machinery. Fault detection and isolation (FDI) techniques can be broadly classified into two categories: model-based FDI and signal processing based FDI.
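The model-based variant can be reduced to a residual check, as in the sketch below: compare each sensor reading with the value a model predicts and flag a fault when the residual exceeds a threshold (the names are illustrative, not from any particular FDI library).

```csharp
using System;

// Sketch of model-based fault detection: compare a sensor reading with the
// value predicted by a model and flag a fault when the residual exceeds a threshold.
static class FaultDetector
{
    public static bool IsFaulty(double measured, double predicted, double threshold)
    {
        double residual = Math.Abs(measured - predicted);
        return residual > threshold;   // residual above threshold => fault detected
    }
}
```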
AnimatLab is an open-source neuromechanical simulation tool that allows authors to easily build and test biomechanical models and the neural networks that control them to produce behaviors. Users can construct neural models of varying levels of detail, 3D mechanical models of triangle meshes, and use muscles, motors, receptive fields, stretch sensors and other transducers to interface the two systems. Experiments can be run in which various stimuli are applied and data is recorded, making it a useful tool for computational neuroscience. The software can also be used to model biomimetic robotic systems.
A convolutional neural network (CNN) is a regularized type of feed-forward neural network that learns features by itself via filter optimization. Vanishing gradients and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by using regularized weights over fewer connections. For example, each neuron in a fully connected layer would require 10,000 weights to process an image sized 100 × 100 pixels, whereas a 5 × 5 convolution kernel needs only 25 weights, shared across every position of the image. Higher-layer features are extracted from wider context windows, compared to lower-layer features.
LeNet is a convolutional neural network structure proposed by LeCun et al. in 1998. In general, LeNet refers to LeNet-5 and is a simple convolutional neural network. Convolutional neural networks are a kind of feed-forward neural network whose artificial neurons can respond to a part of the surrounding cells in the coverage range and perform well in large-scale image processing.