Milnor number

In mathematics, and particularly singularity theory, the Milnor number, named after John Milnor, is an invariant of a function germ.

If f is a complex-valued holomorphic function germ then the Milnor number of f, denoted μ(f), is either a nonnegative integer, or is infinite. It can be considered both a geometric invariant and an algebraic invariant. This is why it plays an important role in algebraic geometry and singularity theory.

Algebraic definition

Consider a holomorphic complex function germ

$$f : (\mathbb{C}^n, 0) \to (\mathbb{C}, 0)$$

and denote by $\mathcal{O}_n$ the ring of all function germs $(\mathbb{C}^n, 0) \to \mathbb{C}$. Every level set of a function is a complex hypersurface in $\mathbb{C}^n$, therefore we will call $f$ a hypersurface singularity.

Assume it is an isolated singularity: in the case of holomorphic mappings we say that a hypersurface singularity $f$ is singular at $0 \in \mathbb{C}^n$ if its gradient $\nabla f$ is zero at $0$, and we say that $0$ is an isolated singular point if it is the only singular point in a sufficiently small neighbourhood of $0$. In particular, the multiplicity of the gradient

$$\mu(f) = \dim_{\mathbb{C}} \mathcal{O}_n / (\nabla f),$$

where $(\nabla f) = (\partial f/\partial z_1, \ldots, \partial f/\partial z_n)$ is the Jacobian ideal of $f$, is finite by an application of Rückert's Nullstellensatz. This number $\mu(f)$ is the Milnor number of the singularity of $f$ at $0$.

Note that the multiplicity of the gradient is finite if and only if the origin is an isolated critical point of f.
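
For polynomial germs this dimension can be computed with a computer algebra system. The following is a minimal Python/sympy sketch (the helper `milnor_number` is our own, not a library routine). It computes $\dim_{\mathbb{C}} \mathbb{C}[x_1,\ldots,x_n]/(\nabla f)$ by counting the standard monomials of a Gröbner basis of the Jacobian ideal; this equals the local Milnor number whenever the origin is the only critical point of the polynomial, which is the case in all the examples below.

```python
import itertools
import sympy as sp

def milnor_number(f, variables):
    """dim_C of C[x_1,...,x_n]/(Jacobian ideal of f), via a Groebner basis.

    This equals mu(f) when the origin is the only critical point of f;
    in general it counts all critical points with multiplicity."""
    jacobian = [sp.diff(f, v) for v in variables]
    G = sp.groebner(jacobian, *variables, order='grevlex')
    lead = [p.monoms(order='grevlex')[0] for p in G.polys]
    n = len(variables)
    if any(all(e == 0 for e in m) for m in lead):
        return 0                    # unit ideal: no critical points at all
    # The quotient is finite-dimensional iff each variable occurs as a
    # pure power among the leading monomials; these powers bound the degrees.
    bounds = []
    for i in range(n):
        pure = [m[i] for m in lead
                if m[i] > 0 and all(m[j] == 0 for j in range(n) if j != i)]
        if not pure:
            return sp.oo            # non-isolated singularity
        bounds.append(min(pure))
    # Count the monomials divisible by no leading monomial ("standard monomials").
    return sum(1 for e in itertools.product(*[range(b) for b in bounds])
               if not any(all(e[j] >= m[j] for j in range(n)) for m in lead))
```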

Geometric interpretation

Milnor originally [1] introduced $\mu(f)$ in geometric terms in the following way. All fibers $f^{-1}(c)$ for values $c$ close to $0$ are nonsingular manifolds of real dimension $2(n-1)$. Their intersection with a small open disc $B_\varepsilon$ centered at $0$ is a smooth manifold $F$ called the Milnor fiber. Up to diffeomorphism $F$ does not depend on $c$ or $\varepsilon$ if they are small enough. It is also diffeomorphic to the fiber of the Milnor fibration map.

The Milnor fiber $F$ is a smooth manifold of dimension $2(n-1)$ and has the same homotopy type as a bouquet of $\mu$ spheres $S^{n-1}$. This is to say that its middle Betti number $b_{n-1}(F)$ is equal to the Milnor number and it has the homology of a point in dimensions less than $n-1$. For example, a complex plane curve near every singular point $z_0$ has its Milnor fiber homotopic to a wedge of $\mu(f, z_0)$ circles (the Milnor number is a local property, so it can have different values at different singular points).

Thus we have the equalities:

Milnor number = number of spheres in the wedge = middle Betti number $b_{n-1}(F)$ of the Milnor fiber = degree of the map $\nabla f / |\nabla f|$ on the small sphere $S_\varepsilon$ = multiplicity of the gradient $\nabla f$.

Another way of looking at the Milnor number is by perturbation. We say that a point $z_0$ is a degenerate singular point, or that $f$ has a degenerate singularity, at $z_0$ if $z_0$ is a singular point and the Hessian matrix of all second order partial derivatives has zero determinant at $z_0$:

$$\det\left(\frac{\partial^2 f}{\partial z_i \, \partial z_j}\right)_{i,j=1}^{n}(z_0) = 0.$$

We assume that f has a degenerate singularity at 0. We can speak about the multiplicity of this degenerate singularity by thinking about how many points are infinitesimally glued. If we now perturb f in a certain stable way, the isolated degenerate singularity at 0 will split up into other isolated singularities which are non-degenerate. The number of such isolated non-degenerate singularities will be the number of points that have been infinitesimally glued.

Precisely, we take another function germ g which is non-singular at the origin and consider the new function germ h := f + εg, where ε is very small. When ε = 0 then h = f. The function h is called a morsification of f. For generic g and small ε ≠ 0, the singularities of h near the origin are all non-degenerate; computing them explicitly may be very difficult, indeed computationally impossible, but their number, the number of points that have been infinitesimally glued, this local multiplicity of f, is exactly the Milnor number of f.
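
A small sympy sketch makes this concrete. The germ (that of Example 2 below), the choice g = x, and the value of ε are our own illustrative picks, and we count complex critical points:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 + x*y**2              # mu(f) = 4 (see Example 2 below)
g = x                          # a germ that is non-singular at the origin
eps = sp.Rational(1, 100)      # a small, fixed perturbation size
h = f + eps * g                # a morsification of f

# complex critical points of h: common zeros of its gradient
crit = sp.solve([sp.diff(h, x), sp.diff(h, y)], [x, y], dict=True)

# all of them are non-degenerate: the Hessian determinant is non-zero there
hess = sp.hessian(h, (x, y))
nondegenerate = [p for p in crit if hess.subs(p).det() != 0]
print(len(crit), len(nondegenerate))   # prints: 4 4
```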

Further contributions [2] give meaning to the Milnor number in terms of the dimension of the space of versal deformations, i.e. the Milnor number is the minimal dimension of the parameter space of deformations that carry all information about the initial singularity.

Examples

Here we give some worked examples in two variables. Working with only one variable is too simple and does not give a feel for the techniques, whereas working with three variables can be quite tricky. Two is a nice number. Also we stick to polynomials: if f is only holomorphic and not a polynomial, then we could work with the power series expansion of f instead.

Example 1

Consider a function germ with a non-degenerate singularity at 0, say $f(x,y) = x^2 + y^2$. The Jacobian ideal is just $(2x, 2y) = (x, y)$. We next compute the local algebra:

$$\mathcal{A}_f = \mathcal{O}_2/(x, y) \cong \mathbb{C}.$$

To see why this is true we can use Hadamard's lemma, which says that we can write any function $h \in \mathcal{O}_2$ as

$$h(x,y) = k + x h_1(x,y) + y h_2(x,y)$$

for some constant $k$ and functions $h_1$ and $h_2$ in $\mathcal{O}_2$ (where either $h_1$ or $h_2$ or both may be exactly zero). So, modulo functional multiples of $x$ and $y$, we can write $h$ as a constant. The space of constant functions is spanned by $1$, hence $\mathcal{A}_f \cong \mathbb{C}$.

It follows that μ(f) = 1. It is easy to check that for any function germ g with a non-degenerate singularity at 0 we get μ(g) = 1.

Note that applying this method to a non-singular function germ g we get μ(g) = 0.
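
The first computation can be checked with the milnor_number helper sketched above (recall its caveat: the Gröbner-basis count is global, so it gives the local Milnor number only when the origin is the sole critical point, as is the case here):

```python
import sympy as sp

x, y = sp.symbols('x y')
print(milnor_number(x**2 + y**2, [x, y]))   # prints: 1
```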

Example 2

Let $f(x,y) = x^3 + xy^2$. Then the Jacobian ideal is $(3x^2 + y^2, 2xy)$, and the local algebra is

$$\mathcal{A}_f = \mathcal{O}_2/(3x^2 + y^2, xy) = \operatorname{span}\{1, x, y, y^2\}.$$

So in this case $\mu(f) = 4$.
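
This agrees with the milnor_number helper from the algebraic definition section (our own sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
print(milnor_number(x**3 + x*y**2, [x, y]))   # prints: 4
```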

Example 3

One can show that if $f(x,y) = x^2 y^2$ then $\mu(f) = \infty$.

This can be explained by the fact that $f$ is singular at every point of the $x$-axis (and likewise of the $y$-axis): the singular point at the origin is not isolated, so the multiplicity of the gradient is infinite.
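
The helper above detects this: the Jacobian ideal $(2xy^2, 2x^2y)$ contains no pure power of $x$ or of $y$, so the quotient algebra is infinite-dimensional:

```python
import sympy as sp

x, y = sp.symbols('x y')
print(milnor_number(x**2 * y**2, [x, y]))   # prints: oo
```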

Versal deformations

Let $f$ have finite Milnor number $\mu$, and let $g_1, g_2, \ldots, g_\mu$ be a basis for the local algebra, considered as a vector space. Then a miniversal deformation of $f$ is given by

$$F : (\mathbb{C}^n \times \mathbb{C}^\mu, 0) \to (\mathbb{C}, 0),$$
$$F(z, a) = f(z) + a_1 g_1(z) + \cdots + a_\mu g_\mu(z),$$

where $(a_1, \ldots, a_\mu) \in \mathbb{C}^\mu$. These deformations (or unfoldings) are of great interest in singularity theory and bifurcation theory.
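
For the germ of Example 2 this construction can be carried out mechanically. The sketch below (same sympy machinery, same global-versus-local caveat as before, and with a bounding box for the monomial search chosen large enough by hand) reads a monomial basis of the local algebra off a Gröbner basis and assembles the miniversal deformation:

```python
import itertools
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 + x*y**2

# Standard monomials of the Jacobian ideal form a C-basis of the local algebra.
J = [sp.diff(f, v) for v in (x, y)]
G = sp.groebner(J, x, y, order='grevlex')
lead = [p.monoms(order='grevlex')[0] for p in G.polys]
basis = [x**i * y**j
         for i, j in itertools.product(range(4), repeat=2)   # box big enough here
         if not any(i >= a and j >= b for a, b in lead)]

a = sp.symbols(f'a1:{len(basis) + 1}')
F = f + sum(ai * gi for ai, gi in zip(a, basis))
print(basis)   # [1, y, y**2, x]  (so mu = 4)
print(F)       # e.g. x**3 + x*y**2 + a1 + a2*y + a3*y**2 + a4*x
```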

Invariance

We can collect function germs together to construct equivalence classes. One standard equivalence is A-equivalence. We say that two function germs $f, g : (\mathbb{C}^n, 0) \to (\mathbb{C}, 0)$ are A-equivalent if there exist diffeomorphism germs $\Phi : (\mathbb{C}^n, 0) \to (\mathbb{C}^n, 0)$ and $\Psi : (\mathbb{C}, 0) \to (\mathbb{C}, 0)$ such that $g = \Psi \circ f \circ \Phi^{-1}$: there exists a diffeomorphic change of variable in both domain and range which takes $f$ to $g$.

If f and g are A-equivalent then μ(f) = μ(g).

Nevertheless, the Milnor number does not offer a complete invariant for function germs, i.e. the converse is false: there exist function germs $f$ and $g$ with $\mu(f) = \mu(g)$ which are not A-equivalent. To see this consider $f(x,y) = x^3 + y^3$ and $g(x,y) = x^2 + y^5$. We have $\mu(f) = \mu(g) = 4$, but $f$ and $g$ are clearly not A-equivalent since the Hessian matrix of $f$ at $0$ is equal to zero while that of $g$ is not (and the rank of the Hessian is an A-invariant, as is easy to see).
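
This example is easy to verify with the milnor_number helper from the algebraic definition section (again, our own sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
f, g = x**3 + y**3, x**2 + y**5
print(milnor_number(f, [x, y]), milnor_number(g, [x, y]))   # prints: 4 4

at0 = {x: 0, y: 0}
print(sp.hessian(f, (x, y)).subs(at0).rank(),               # prints: 0 1
      sp.hessian(g, (x, y)).subs(at0).rank())
```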

References

  1. Milnor, John (1969). Singular Points of Complex Hypersurfaces. Annals of Mathematics Studies. Princeton University Press.
  2. Arnold, V. I.; Gusein-Zade, S. M.; Varchenko, A. N. (1988). Singularities of Differentiable Maps. Vol. 2. Birkhäuser.