Rectifier (disambiguation)

The word rectifier denotes something that rectifies, that is, straightens or corrects. It may refer to:

Rectifier AC-DC conversion device

A rectifier is an electrical device that converts alternating current (AC), which periodically reverses direction, to direct current (DC), which flows in only one direction.
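
As a minimal numerical sketch of this idea (all values are illustrative, not from the source): a sampled sine wave models AC, and clipping its negative half-cycles models an ideal half-wave rectifier.

```python
import numpy as np

# One second of 50 Hz AC modeled as a sampled sine wave (values illustrative).
t = np.linspace(0.0, 1.0, 1000)
ac = 170.0 * np.sin(2 * np.pi * 50 * t)   # ~120 V RMS mains, ~170 V peak

# An ideal half-wave rectifier blocks one direction of current flow,
# so the output never reverses sign.
rectified = np.maximum(ac, 0.0)

print(f"mean of AC input:      {ac.mean():+.2f} V")         # ~0: flow reverses
print(f"mean after rectifying: {rectified.mean():+.2f} V")  # >0: one direction
```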

Rectifier (neural networks) activation function in artificial neural networks

In the context of artificial neural networks, the rectifier is an activation function defined as the positive part of its argument: f(x) = max(0, x), where x is the input to a neuron.
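
In code this is a one-liner; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def relu(x):
    """Rectifier activation: the positive part of the argument, max(0, x)."""
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [0.  0.  0.  1.5]
```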

Mesa Boogie company

Mesa/Boogie is an American company in Petaluma, California, that manufactures amplifiers for guitars and basses. It has been in operation since 1969.

Related Research Articles

Artificial neural network computational model used in machine learning, computer science and other research disciplines, which is based on a large collection of connected simple units called artificial neurons, loosely analogous to neurons in a biological brain

Artificial neural networks (ANN) or connectionist systems are computing systems vaguely inspired by the biological neural networks that constitute animal brains. The neural network itself is not an algorithm, but rather a framework for many different machine learning algorithms to work together and process complex data inputs. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the results to identify cats in other images. They do this without any prior knowledge about cats, for example, that they have fur, tails, whiskers and cat-like faces. Instead, they automatically generate identifying characteristics from the learning material that they process.

Direct current Unidirectional flow of electric charge

Direct current (DC) is the unidirectional flow of electric charge. A battery is a good example of a DC power supply. Direct current may flow in a conductor such as a wire, but can also flow through semiconductors, insulators, or even through a vacuum as in electron or ion beams. The electric current flows in a constant direction, distinguishing it from alternating current (AC). A term formerly used for this type of current was galvanic current.

Diode bridge

A diode bridge is an arrangement of four diodes in a bridge circuit configuration that provides the same polarity of output for either polarity of input.
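
Functionally, an ideal diode bridge performs full-wave rectification: any input polarity maps to one output polarity. A rough sketch of that input-output relation (ignoring real diode forward-voltage drops):

```python
import numpy as np

# For an ideal bridge the output polarity is fixed regardless of input
# polarity: v_out = |v_in|. (A real bridge also loses about two diode
# forward-voltage drops, roughly 1.2-1.4 V for silicon.)
v_in = np.array([-170.0, -85.0, 0.0, 85.0, 170.0])
v_out = np.abs(v_in)
print(v_out)  # [170.  85.   0.  85. 170.]
```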

Silicon controlled rectifier semiconductor electronic device with three p-n junctions, mainly used in devices where the control of high power is required

A silicon controlled rectifier or semiconductor controlled rectifier is a four-layer solid-state current-controlling device. The principle of four-layer p–n–p–n switching was developed by Moll, Tanenbaum, Goldey and Holonyak of Bell Laboratories in 1956. The practical demonstration of silicon controlled switching and detailed theoretical behavior of a device in agreement with the experimental results was presented by Dr Ian M. Mackintosh of Bell Laboratories in January 1958. The name "silicon controlled rectifier" is General Electric's trade name for a type of thyristor. The SCR was developed by a team of power engineers led by Gordon Hall and commercialized by Frank W. "Bill" Gutzwiller in 1957.

Artificial neuron mathematical function conceived as a model of biological neurons

An artificial neuron is a mathematical function conceived as a model of biological neurons in a neural network. Artificial neurons are elementary units in an artificial neural network. The artificial neuron receives one or more inputs and sums them to produce an output. Usually each input is separately weighted, and the sum is passed through a non-linear function known as an activation function or transfer function. The transfer functions usually have a sigmoid shape, but they may also take the form of other non-linear functions, piecewise linear functions, or step functions. They are also often monotonically increasing, continuous, differentiable and bounded. The thresholding function has inspired the building of logic gates referred to as threshold logic, applicable to building logic circuits that resemble brain processing. For example, new devices such as memristors have been extensively used to develop such logic in recent times.
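
A minimal sketch of the weighted sum and transfer function just described, with hypothetical weights and inputs:

```python
import numpy as np

def sigmoid(z):
    """A common sigmoid-shaped transfer function, bounded in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    """Weight each input separately, sum, then apply the activation."""
    return sigmoid(np.dot(weights, inputs) + bias)

# Illustrative values only.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8, 0.2, -0.5])
print(neuron(x, w, bias=0.1))  # a single bounded output in (0, 1)
```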

Geoffrey Hinton British-Canadian computer scientist and psychologist

Geoffrey Everest Hinton is an English-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. Since 2013 he has divided his time between Google and the University of Toronto.

Rotary converter

A rotary converter is a type of electrical machine which acts as a mechanical rectifier, inverter or frequency converter.

Mercury-arc valve electrical equipment for converting high-voltage or -current alternating current into direct current

A mercury-arc valve or mercury-vapor rectifier or (UK) mercury-arc rectifier is a type of electrical rectifier used for converting high-voltage or high-current alternating current (AC) into direct current (DC). It is a type of cold cathode gas-filled tube, but is unusual in that the cathode, instead of being solid, is made from a pool of liquid mercury and is therefore self-restoring. As a result, mercury-arc valves were much more rugged and long-lasting, and could carry much higher currents than most other types of gas discharge tube.

Traction substation electrical substation for railways, or trams

A traction substation, traction current converter plant or traction power substation (TPSS) is an electrical substation that converts electric power from the form provided by the electrical power industry for public utility service to an appropriate voltage, current type and frequency to supply railways, trams (streetcars) or trolleybuses with traction current.

Recurrent neural network class of artificial neural network where connections between units form a directed cycle

A recurrent neural network (RNN) is a class of artificial neural network where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition.
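
A minimal sketch of that internal state, with hypothetical weight shapes: the hidden vector carries memory from one sequence step to the next, forming the directed cycle.

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.normal(scale=0.5, size=(4, 3))  # input -> hidden
W_h = rng.normal(scale=0.5, size=(4, 4))  # hidden -> hidden: the cycle
b = np.zeros(4)

h = np.zeros(4)                           # internal state (memory)
for x in rng.normal(size=(5, 3)):         # a sequence of 5 input vectors
    h = np.tanh(W_x @ x + W_h @ h + b)    # new state depends on the old state
print(h)
```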

Neural circuit network or circuit of neurons

A neural circuit is a population of neurons interconnected by synapses to carry out a specific function when activated. Neural circuits interconnect to one another to form large scale brain networks. Biological neural networks have inspired the design of artificial neural networks, but artificial neural networks are usually not strict copies of their biological counterparts.

Neural network Structure in biology and artificial intelligence

A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. Thus a neural network is either a biological neural network, made up of real biological neurons, or an artificial neural network, for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled as weights. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed. This activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or between −1 and 1.

Multilayer perceptron

A multilayer perceptron (MLP) is a class of feedforward artificial neural network. An MLP consists of at least three layers of nodes: an input layer, a hidden layer and an output layer. Except for the input nodes, each node is a neuron that uses a nonlinear activation function. An MLP utilizes a supervised learning technique called backpropagation for training. Its multiple layers and non-linear activation distinguish an MLP from a linear perceptron. It can distinguish data that is not linearly separable.
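
A minimal forward-pass sketch of that three-layer structure. The weights here are random placeholders; in a real MLP they would be learned with backpropagation.

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)  # input (3) -> hidden (5)
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)  # hidden (5) -> output (2)

def mlp_forward(x):
    hidden = np.tanh(W1 @ x + b1)  # nonlinear activation on the hidden layer
    return W2 @ hidden + b2        # output layer

print(mlp_forward(np.array([0.2, -0.7, 1.0])))
```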

Spiking neural network

Spiking neural networks (SNNs) are artificial neural network models that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs also incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not fire at each propagation cycle, but rather fire only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific value. When a neuron fires, it generates a signal which travels to other neurons which, in turn, increase or decrease their potentials in accordance with this signal.
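
A minimal leaky integrate-and-fire sketch of that firing rule (all constants are illustrative): the membrane potential decays over time, integrates input, and emits a spike only when it crosses a threshold.

```python
import numpy as np

v, threshold, leak = 0.0, 1.0, 0.9       # illustrative constants
inputs = np.array([0.3, 0.4, 0.5, 0.1, 0.6, 0.7])

for step, i in enumerate(inputs):
    v = leak * v + i                     # potential decays, then integrates
    if v >= threshold:                   # fire only at the threshold
        print(f"spike at step {step}")
        v = 0.0                          # reset after firing
```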

Activation function function that defines the output of a node given an input or set of inputs

In artificial neural networks, the activation function of a node defines the output of that node, or "neuron," given an input or set of inputs. This output is then used as input for the next node and so on until a desired solution to the original problem is found.
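
A minimal sketch comparing a few common activation functions on the same inputs (the function names are standard; the input values are illustrative):

```python
import numpy as np

x = np.array([-2.0, 0.0, 2.0])
print("sigmoid:", 1.0 / (1.0 + np.exp(-x)))  # output bounded in (0, 1)
print("tanh:   ", np.tanh(x))                # output bounded in (-1, 1)
print("relu:   ", np.maximum(0.0, x))        # 0 below zero, linear above
```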

Neural network refers to interconnected populations of neurons or neuron simulations that form the structure and architecture of nervous systems in animals, humans, and computing systems.

Deep learning branch of machine learning

Deep learning is part of a broader family of machine learning methods based on artificial neural networks with multiple layers. Learning can be supervised, semi-supervised or unsupervised.

Vanishing gradient problem

In machine learning, the vanishing gradient problem is a difficulty found in training artificial neural networks with gradient-based learning methods and backpropagation. In such methods, each of the neural network's weights receives an update proportional to the partial derivative of the error function with respect to the current weight in each iteration of training. The problem is that in some cases the gradient will be vanishingly small, effectively preventing the weight from changing its value. In the worst case, this may completely stop the neural network from further training. As one example of the cause of the problem, traditional activation functions such as the hyperbolic tangent function have gradients in the range (0, 1], and backpropagation computes gradients by the chain rule. This has the effect of multiplying n of these small numbers to compute gradients of the "front" layers in an n-layer network, meaning that the gradient decreases exponentially with n and the front layers train very slowly.
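
A minimal numerical sketch of that multiplication effect: chaining one tanh derivative per layer shrinks the gradient factor roughly exponentially with depth (the pre-activation value is illustrative).

```python
import numpy as np

def tanh_grad(x):
    """Derivative of tanh(x), always in (0, 1]."""
    return 1.0 - np.tanh(x) ** 2

# Backpropagation multiplies one such factor per layer (chain rule).
# With an illustrative pre-activation of 1.0 at every layer:
factor = tanh_grad(1.0)                  # ~0.42
for n in (1, 5, 10, 20):
    print(f"{n:2d} layers: gradient scaled by ~{factor ** n:.1e}")
```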