Lenia is a family of cellular automata created by Bert Wang-Chak Chan. [1] [2] [3] It is intended to be a continuous generalization of Conway's Game of Life, with continuous states, space and time. As a consequence of its continuous, high-resolution domain, the complex autonomous patterns ("lifeforms" or "spaceships") generated in Lenia are described as differing from those appearing in other cellular automata, being "geometric, metameric, fuzzy, resilient, adaptive, and rule-generic". [1]
Lenia won the 2018 Virtual Creatures Contest at the Genetic and Evolutionary Computation Conference in Kyoto, [4] an honorable mention for the ALIFE Art Award at ALIFE 2018 in Tokyo, [5] and Outstanding Publication of 2019 by the International Society for Artificial Life (ISAL). [6]
Let $\mathcal{L}$ be the lattice or grid containing a set of states $S^{\mathcal{L}}$. Like many cellular automata, Lenia is updated iteratively; each output state is a pure function of the previous state, such that

$$\Phi(A^0) = A^{\Delta t},\quad \Phi(A^{\Delta t}) = A^{2\Delta t},\quad \dots,\quad \Phi(A^{t}) = A^{t+\Delta t},\quad \dots$$

where $A^0$ is the initial state and $\Phi : S^{\mathcal{L}} \to S^{\mathcal{L}}$ is the global rule, representing the application of the local rule over every site $\mathbf{x} \in \mathcal{L}$. Thus $\Phi^N(A^t) = A^{t + N\,\Delta t}$.
If the simulation is advanced by $\Delta t$ at each timestep, then the time resolution is $T = \frac{1}{\Delta t}$.
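As a minimal sketch of this iteration (assuming NumPy; `run` and `step` are illustrative names, with `step` standing for the local rule applied over the whole grid, as defined in the sections below):

```python
import numpy as np

def run(A0: np.ndarray, step, T: int, steps: int) -> np.ndarray:
    """Iterate the global rule Phi: A^{t + dt} = Phi(A^t), with dt = 1/T."""
    dt = 1.0 / T              # timestep from the chosen time resolution T
    A = A0
    for _ in range(steps):
        A = step(A, dt)       # one application of the global rule
    return A
```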
Let $S = \{0, 1, \dots, P\}$ with maximum $P \in \mathbb{Z}^{+}$. This is the state set of the automaton and characterizes the possible states that may be found at each site. Larger $P$ correspond to higher state resolutions in the simulation. Many cellular automata use the lowest possible state resolution, i.e. $P = 1$. Lenia allows for much higher resolutions. Note that the actual value at each site is not in $S$ but rather an integer multiple of $\Delta p = \frac{1}{P}$; therefore we have $A^t(\mathbf{x}) \in [0, 1]$ for all $\mathbf{x} \in \mathcal{L}$. For example, given $P = 4$, $A^t(\mathbf{x}) \in \{0, 0.25, 0.5, 0.75, 1\}$.
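For illustration, a hypothetical grid at state resolution $P = 4$ (assuming NumPy; the variable names are illustrative):

```python
import numpy as np

P = 4                                           # state resolution
rng = np.random.default_rng(seed=0)
A = rng.integers(0, P + 1, size=(64, 64)) / P   # integer multiples of 1/P
print(np.unique(A))                             # [0.  0.25 0.5  0.75 1. ]
```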
Mathematically, neighborhoods like those in Game of Life may be represented using a set of position vectors in $\mathbb{R}^2$. For the classic Moore neighborhood used by Game of Life, for instance, $\mathcal{N} = \{-1, 0, 1\}^2$; i.e. a square of size 3 centered on every site.
In Lenia's case, the neighborhood is instead a ball of radius $R$ centered on a site, $\mathcal{N} = \{\mathbf{x} \in \mathcal{L} : \lVert \mathbf{x} \rVert_2 \leq R\}$, which may include the original site itself.
Note that the neighborhood vectors are not absolute positions of elements, but rather a set of relative positions (deltas) with respect to any given site.
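Both neighborhood types can be written down as sets of relative offsets; a sketch assuming an integer lattice, with illustrative names:

```python
# Moore neighborhood of Game of Life: the square {-1, 0, 1}^2 of relative offsets.
moore = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

def ball_offsets(R: int):
    """Lenia neighborhood: all offsets within Euclidean distance R of a site."""
    return [(dx, dy)
            for dx in range(-R, R + 1)
            for dy in range(-R, R + 1)
            if dx * dx + dy * dy <= R * R]
```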
There are discrete and continuous variants of Lenia. Let $\mathbf{x}$ be a vector in $\mathbb{R}^2$ within $\mathcal{L}$ representing the position of a given site, and $\mathcal{N}$ be the set of sites neighboring $\mathbf{x}$. Both variants comprise two stages:

1. Convolution with a kernel $\mathbf{K}$ to compute the potential distribution $U^t(\mathbf{x})$.
2. Application of a growth mapping $G$ to compute the growth distribution $G^t(\mathbf{x}) = G(U^t(\mathbf{x}))$.
Once $G^t$ is computed, it is scaled by the timestep $\Delta t = \frac{1}{T}$ given by the chosen time resolution and added to the original state value:

$$A^{t+\Delta t}(\mathbf{x}) = \operatorname{clip}\!\left(A^t(\mathbf{x}) + \Delta t\, G^t(\mathbf{x}),\ 0,\ 1\right)$$

Here, the clip function is defined by $\operatorname{clip}(u, a, b) := \min(\max(u, a), b)$.
The local rules are defined as follows for discrete and continuous Lenia:

$$U^t(\mathbf{x}) = \begin{cases} \sum_{\mathbf{n} \in \mathcal{N}} \mathbf{K}(\mathbf{n})\, A^t(\mathbf{x} + \mathbf{n})\, \Delta x^2 & \text{discrete Lenia} \\ \int_{\mathcal{N}} \mathbf{K}(\mathbf{n})\, A^t(\mathbf{x} + \mathbf{n})\, d\mathbf{n} & \text{continuous Lenia} \end{cases}$$

$$G^t(\mathbf{x}) = G(U^t(\mathbf{x})), \qquad A^{t+\Delta t}(\mathbf{x}) = \operatorname{clip}\!\left(A^t(\mathbf{x}) + \Delta t\, G^t(\mathbf{x}),\ 0,\ 1\right)$$
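As a minimal sketch of this update in code (assuming NumPy; the function names are illustrative, the kernel and growth mapping are constructed in the following sections, and FFT convolution with periodic boundaries is an implementation choice rather than part of the definition):

```python
import numpy as np

def step(A, K, G, dt):
    """One discrete-Lenia update: potential -> growth -> scaled, clipped addition."""
    # Potential distribution U^t = K * A^t. K is assumed zero-padded to A's
    # shape and centred, so ifftshift places its centre at the origin before
    # the circular convolution via FFT.
    U = np.real(np.fft.ifft2(np.fft.fft2(A) * np.fft.fft2(np.fft.ifftshift(K))))
    # Growth distribution G^t = G(U^t), then the clipped Euler step.
    return np.clip(A + dt * G(U), 0.0, 1.0)
```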
There are many ways to generate the convolution kernel $\mathbf{K}$. The final kernel is the composition of a kernel shell $K_C$ and a kernel skeleton $K_S$.
For the kernel shell $K_C : [0, 1] \to [0, 1]$, Chan gives several functions that are defined radially. Kernel shell functions are unimodal and subject to the constraint $K_C(0) = K_C(1) = 0$ (and typically $K_C\!\left(\tfrac{1}{2}\right) = 1$ as well). Example kernel functions include:

$$K_C(r) = \begin{cases} \exp\left(\alpha - \frac{\alpha}{4r(1-r)}\right) & \text{exponential},\ \alpha = 4 \\ \left(4r(1-r)\right)^{\alpha} & \text{polynomial},\ \alpha = 4 \\ \mathbf{1}_{\left[\frac{1}{4}, \frac{3}{4}\right]}(r) & \text{rectangular} \end{cases}$$

Here, $\mathbf{1}_A(r)$ is the indicator function, equal to $1$ when $r \in A$ and $0$ otherwise.
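These shell functions translate directly into code; a sketch assuming NumPy, with $r$ expected to lie in $[0, 1]$ and illustrative function names:

```python
import numpy as np

def shell_exp(r, alpha=4):
    """Exponential shell exp(alpha - alpha/(4r(1-r))): 1 at r = 1/2, -> 0 at r = 0, 1."""
    r = np.clip(r, 1e-9, 1 - 1e-9)     # guard against division by zero at the endpoints
    return np.exp(alpha - alpha / (4 * r * (1 - r)))

def shell_poly(r, alpha=4):
    """Polynomial shell (4r(1-r))^alpha."""
    return (4 * r * (1 - r)) ** alpha

def shell_rect(r):
    """Rectangular shell: the indicator of [1/4, 3/4]."""
    return ((r >= 0.25) & (r <= 0.75)).astype(float)
```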
Once the kernel shell has been defined, the kernel skeleton $K_S$ is used to expand it and compute the actual values of the kernel by transforming the shell into a series of concentric rings. The height of each ring is controlled by a kernel peak vector $\beta = (\beta_1, \beta_2, \dots, \beta_B) \in [0, 1]^B$, where $B$ is the rank of the parameter vector. Then the kernel skeleton $K_S$ is defined as

$$K_S(r; \beta) = \beta_{\lfloor Br \rfloor}\, K_C(Br \bmod 1)$$
The final kernel is therefore

$$\mathbf{K}(\mathbf{n}) = \frac{K_S(\lVert \mathbf{n} \rVert_2)}{|K_S|}$$

such that $\mathbf{K}$ is normalized to have an element sum of $1$ and $\mathbf{K} * A^t \in [0, 1]$ (for conservation of mass), where $|K_S| = \sum_{\mathcal{N}} K_S\, \Delta x^2$ in the discrete case, and $|K_S| = \int_{\mathcal{N}} K_S\, d\mathbf{n}$ in the continuous case.
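A sketch of the full kernel construction under these definitions (assuming NumPy; `build_kernel` and its arguments are illustrative names, and the radius is normalized by $R$ so that $r \in [0, 1)$ inside the neighborhood):

```python
import numpy as np

def build_kernel(N, R, beta, shell):
    """K(n) = K_S(||n|| / R) / |K_S| on an N x N grid, zero outside the ball."""
    beta = np.asarray(beta, dtype=float)
    B = len(beta)
    y, x = np.ogrid[-N // 2:N // 2, -N // 2:N // 2]
    r = np.hypot(x, y) / R                          # normalized radius ||n|| / R
    ring = np.minimum((B * r).astype(int), B - 1)   # ring index floor(B r)
    Ks = (r < 1) * beta[ring] * shell((B * r) % 1)  # kernel skeleton
    return Ks / Ks.sum()                            # element sum 1 (mass conservation)
```

For a single-ring kernel, `beta=[1.0]` recovers the plain shell; multiple peaks, e.g. `beta=[0.5, 1.0]`, produce concentric rings of different heights.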
The growth mapping $G : [0, 1] \to [-1, 1]$, which is analogous to an activation function, may be any function that is unimodal, nonmonotonic, and accepts parameters $\mu$ and $\sigma$. Examples include

$$G(u; \mu, \sigma) = \begin{cases} 2\exp\left(-\frac{(u-\mu)^2}{2\sigma^2}\right) - 1 & \text{exponential} \\ 2 \cdot \mathbf{1}_{[\mu \pm 3\sigma]}(u)\left(1 - \frac{(u-\mu)^2}{9\sigma^2}\right)^{\alpha} - 1 & \text{polynomial},\ \alpha = 4 \\ 2 \cdot \mathbf{1}_{[\mu \pm \sigma]}(u) - 1 & \text{rectangular} \end{cases}$$

where $u$ is a potential value drawn from $U^t$.
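The exponential and rectangular variants translate directly into code; a sketch assuming NumPy, with illustrative function names:

```python
import numpy as np

def growth_exp(u, mu, sigma):
    """Exponential growth mapping: 2 exp(-(u - mu)^2 / (2 sigma^2)) - 1."""
    return 2.0 * np.exp(-((u - mu) ** 2) / (2.0 * sigma ** 2)) - 1.0

def growth_rect(u, mu, sigma):
    """Rectangular growth mapping: +1 on [mu - sigma, mu + sigma], else -1."""
    return 2.0 * (np.abs(u - mu) <= sigma) - 1.0
```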
The Game of Life may be regarded as a special case of discrete Lenia with $R = T = P = 1$. In this case, the kernel $\mathbf{K}$ would be rectangular, with the function

$$\mathbf{K}(r) = \mathbf{1}_{\left[\frac{1}{4}, \frac{3}{4}\right]}(r) + \frac{1}{2}\,\mathbf{1}_{\left[0, \frac{1}{4}\right)}(r)$$

and the growth rule also rectangular, with $\mu = 0.35$ and $\sigma = 0.07$.
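To illustrate, a sketch of this special case (assuming NumPy; `gol_step` is an illustrative name, and `np.roll` supplies periodic boundaries):

```python
import numpy as np

# Game of Life as discrete Lenia: R = T = P = 1, rectangular kernel and growth.
K = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.5, 1.0],    # centre weighted 1/2, the 8 neighbours weighted 1
              [1.0, 1.0, 1.0]])
K /= K.sum()                      # element sum 1, so the potential U lies in [0, 1]
mu, sigma, dt = 0.35, 0.07, 1.0   # growth window [0.28, 0.42]; dt = 1/T = 1

def gol_step(A: np.ndarray) -> np.ndarray:
    """One Game-of-Life step via the Lenia update (A is a 0/1 float array)."""
    U = sum(K[1 + i, 1 + j] * np.roll(A, (-i, -j), axis=(0, 1))
            for i in (-1, 0, 1) for j in (-1, 0, 1))   # U(x) = sum K(n) A(x + n)
    G = 2.0 * (np.abs(U - mu) <= sigma) - 1.0          # rectangular growth mapping
    return np.clip(A + dt * G, 0.0, 1.0)
```

With these weights, a dead cell with exactly three live neighbours has $U = 3/8.5 \approx 0.353$ (birth), while a live cell with two or three live neighbours has $U \approx 0.294$ or $0.412$ (survival); every other neighbour count falls outside the window $[0.28, 0.42]$, reproducing the standard rules.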
By varying the convolutional kernel, the growth mapping and the initial condition, over 400 "species" of "life" have been discovered in Lenia, displaying "self-organization, self-repair, bilateral and radial symmetries, locomotive dynamics, and sometimes chaotic nature". [7] Chan has created a taxonomy for these patterns. [1]
Other works have noted the strong similarity between cellular automata update rules and convolutions. Indeed, these works have focused on reproducing cellular automata using simplified convolutional neural networks. Mordvintsev et al. investigated the emergence of self-repairing pattern generation. [9] Gilpin found that any cellular automaton could be represented as a convolutional neural network, and trained neural networks to reproduce existing cellular automata. [8]
In this light, cellular automata may be seen as a special case of recurrent convolutional neural networks. Lenia's update rule may also be seen as a single-layer convolution (the "potential field" $U$) with an activation function (the "growth mapping" $G$). However, Lenia uses far larger, fixed kernels and is not trained via gradient descent.