# Geometrical optics

Geometrical optics, or ray optics, is a model of optics that describes light propagation in terms of rays. The ray in geometrical optics is an abstraction useful for approximating the paths along which light propagates under certain circumstances.

The simplifying assumptions of geometrical optics include that light rays:

• propagate in straight-line paths as they travel in a homogeneous medium
• bend, and in particular circumstances may split in two, at the interface between two dissimilar media
• follow curved paths in a medium in which the refractive index changes
• may be absorbed or reflected.

Geometrical optics does not account for certain optical effects such as diffraction and interference. This simplification is useful in practice; it is an excellent approximation when the wavelength is small compared to the size of structures with which the light interacts. The techniques are particularly useful in describing geometrical aspects of imaging, including optical aberrations.

## Explanation

A light ray is a line or curve that is perpendicular to the light's wavefronts (and is therefore collinear with the wave vector). A slightly more rigorous definition of a light ray follows from Fermat's principle, which states that the path taken between two points by a ray of light is the path that can be traversed in the least time. [1]

Geometrical optics is often simplified by making the paraxial approximation, or "small angle approximation". The mathematical behavior then becomes linear, allowing optical components and systems to be described by simple matrices. This leads to the techniques of Gaussian optics and paraxial ray tracing, which are used to find basic properties of optical systems, such as approximate image and object positions and magnifications. [2]
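
The matrix description is easy to demonstrate. The sketch below is illustrative (it assumes NumPy is available; the 50 mm focal length and 5 mm ray height are arbitrary choices): a ray entering parallel to the axis is traced through a thin lens and then one focal length of free space, after which its height is zero, i.e. it crosses the axis at the focal point.

```python
import numpy as np

def free_space(d):
    # Propagation through distance d in a homogeneous medium
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    # Refraction by a thin lens of focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# In the paraxial approximation a ray is the pair (height y, angle u).
ray_in = np.array([5.0, 0.0])            # parallel ray, 5 mm above the axis

# Matrices compose right-to-left, in the order the ray traverses the elements.
system = free_space(50.0) @ thin_lens(50.0)
ray_out = system @ ray_in

print(ray_out)  # height ~0: parallel rays meet at the focal point
```

Longer trains of elements are handled the same way, by multiplying further element matrices on the left.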

## Reflection

Glossy surfaces such as mirrors reflect light in a simple, predictable way. This allows for production of reflected images that can be associated with an actual (real) or extrapolated (virtual) location in space.

With such surfaces, the direction of the reflected ray is determined by the angle the incident ray makes with the surface normal, a line perpendicular to the surface at the point where the ray hits. The incident and reflected rays lie in a single plane, and the angle between the reflected ray and the surface normal is the same as that between the incident ray and the normal. [3] This is known as the Law of Reflection.
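
In vector form the law of reflection reads ${\displaystyle \mathbf {r} =\mathbf {d} -2(\mathbf {d} \cdot \mathbf {n} )\mathbf {n} }$ for a unit incident direction ${\displaystyle \mathbf {d} }$ and unit normal ${\displaystyle \mathbf {n} }$. A minimal numerical check (assuming NumPy; the 45-degree incident ray is an arbitrary example):

```python
import numpy as np

def reflect(d, n):
    """Reflect incident direction d off a surface with unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

d = np.array([1.0, -1.0]) / np.sqrt(2)   # ray travelling down at 45 degrees
n = np.array([0.0, 1.0])                 # normal of a horizontal mirror

r = reflect(d, n)
# Angle of incidence equals angle of reflection (both measured from n):
angle_in  = np.arccos(np.dot(-d, n))
angle_out = np.arccos(np.dot(r, n))
print(r, np.degrees(angle_in), np.degrees(angle_out))
```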

For flat mirrors, the law of reflection implies that images of objects are upright and the same distance behind the mirror as the objects are in front of the mirror. The image size is the same as the object size. (The magnification of a flat mirror is equal to one.) The law also implies that mirror images are parity inverted, which is perceived as a left-right inversion.

Mirrors with curved surfaces can be modeled by ray tracing and using the law of reflection at each point on the surface. For mirrors with parabolic surfaces, parallel rays incident on the mirror produce reflected rays that converge at a common focus. Other curved surfaces may also focus light, but deviations from the ideal shape cause aberrations that smear the focus out in space; in particular, spherical mirrors exhibit spherical aberration. Curved mirrors can form images with magnification greater than or less than one, and the image can be upright or inverted. An upright image formed by reflection in a mirror is always virtual, while an inverted image is real and can be projected onto a screen. [3]

## Refraction

Refraction occurs when light travels through an area of space that has a changing index of refraction. The simplest case of refraction occurs at a sharp interface between a uniform medium with index of refraction ${\displaystyle n_{1}}$ and another medium with index of refraction ${\displaystyle n_{2}}$. In such situations, Snell's Law describes the resulting deflection of the light ray:

${\displaystyle n_{1}\sin \theta _{1}=n_{2}\sin \theta _{2}\ }$

where ${\displaystyle \theta _{1}}$ and ${\displaystyle \theta _{2}}$ are the angles between the normal (to the interface) and the incident and refracted waves, respectively. This phenomenon is also associated with a changing speed of light, as seen from the definition of the index of refraction, ${\displaystyle n=c/v}$, which implies:

${\displaystyle v_{1}\sin \theta _{2}\ =v_{2}\sin \theta _{1}}$

where ${\displaystyle v_{1}}$ and ${\displaystyle v_{2}}$ are the wave velocities through the respective media. [3]
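
A small helper makes the computation concrete. This is a sketch; the indices used for air and water are rounded textbook values:

```python
import math

def refract_angle(n1, theta1_deg, n2):
    """Angle of refraction from Snell's law; None if totally internally reflected."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # no transmitted ray: total internal reflection
    return math.degrees(math.asin(s))

# Air (n = 1.00) into water (n = 1.33), incident 30 degrees from the normal:
theta2 = refract_angle(1.00, 30.0, 1.33)
print(round(theta2, 1))  # about 22.1 degrees, bent toward the normal
```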

One consequence of Snell's Law is that, for light rays traveling from a material with a high index of refraction to a material with a low index of refraction, the interaction with the interface can result in zero transmission. This phenomenon is called total internal reflection and is the basis of fiber-optic technology. As light signals travel down a fiber-optic cable, they undergo total internal reflection, so essentially no light is lost over the length of the cable. It is also possible to produce polarized light rays using a combination of reflection and refraction: when the refracted ray and the reflected ray form a right angle, the reflected ray is plane-polarized. The angle of incidence required for this is known as Brewster's angle. [3]
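
Both special angles follow directly from Snell's law: total internal reflection begins where ${\displaystyle \theta _{2}=90^{\circ }}$, and Brewster's angle makes the reflected and refracted rays perpendicular, giving ${\displaystyle \tan \theta _{B}=n_{2}/n_{1}}$. A sketch with rounded glass and air indices:

```python
import math

def critical_angle(n1, n2):
    """Smallest angle of incidence giving total internal reflection (n1 > n2)."""
    return math.degrees(math.asin(n2 / n1))

def brewster_angle(n1, n2):
    """Angle of incidence at which the reflected ray is fully plane-polarized."""
    return math.degrees(math.atan(n2 / n1))

# Typical glass (n ~ 1.5) and air:
print(round(critical_angle(1.5, 1.0), 1))  # ~41.8 degrees, glass to air
print(round(brewster_angle(1.0, 1.5), 1))  # ~56.3 degrees, air to glass
```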

Snell's Law can be used to predict the deflection of light rays as they pass through "linear media" as long as the indexes of refraction and the geometry of the media are known. For example, the propagation of light through a prism results in the light ray being deflected depending on the shape and orientation of the prism. Additionally, since different frequencies of light have slightly different indexes of refraction in most materials, refraction can be used to produce dispersion spectra that appear as rainbows. The discovery of this phenomenon when passing light through a prism is famously attributed to Isaac Newton. [3]

Some media have an index of refraction which varies gradually with position and, thus, light rays curve through the medium rather than travel in straight lines. This effect is what is responsible for mirages seen on hot days where the changing index of refraction of the air causes the light rays to bend creating the appearance of specular reflections in the distance (as if on the surface of a pool of water). Material that has a varying index of refraction is called a gradient-index (GRIN) material and has many useful properties used in modern optical scanning technologies including photocopiers and scanners. The phenomenon is studied in the field of gradient-index optics. [4]
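
The curving of rays in a gradient can be illustrated with a toy mirage model. Everything here is an assumption for illustration: the linearized index profile, the gradient strength, and the launch conditions are invented numbers, and the paraxial ray equation ${\displaystyle d^{2}y/dx^{2}\approx (1/n)\,dn/dy}$ is integrated with crude Euler steps:

```python
# Toy mirage: air whose index increases with height y above hot ground,
# n(y) = n0 + g*y (a linearized gradient; n0 and g are illustrative values).
n0, g = 1.000, 1e-5          # g in units of 1/metre

def n(y):
    return n0 + g * y

# Integrate d2y/dx2 ~ (dn/dy)/n with small Euler steps: a nearly horizontal
# ray launched slightly downward curves back upward, as in a mirage.
y, slope = 2.0, -0.003       # start 2 m up, heading gently down
dx = 1.0
for _ in range(1000):        # march 1 km
    slope += (g / n(y)) * dx
    y += slope * dx

print(round(y, 2), slope)    # the ray has turned around: slope is now positive
```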

A device which produces converging or diverging light rays due to refraction is known as a lens. Thin lenses produce focal points on either side that can be modeled using the lensmaker's equation. [5] In general, two types of lenses exist: convex lenses, which cause parallel light rays to converge, and concave lenses, which cause parallel light rays to diverge. The detailed prediction of how images are produced by these lenses can be made using ray-tracing similar to curved mirrors. Similarly to curved mirrors, thin lenses follow a simple equation that determines the location of the images given a particular focal length (${\displaystyle f}$) and object distance (${\displaystyle S_{1}}$):

${\displaystyle {\frac {1}{S_{1}}}+{\frac {1}{S_{2}}}={\frac {1}{f}}}$

where ${\displaystyle S_{2}}$ is the distance associated with the image and is considered by convention to be negative if on the same side of the lens as the object and positive if on the opposite side of the lens. [5] The focal length f is considered negative for concave lenses.

Incoming parallel rays are focused by a convex lens into an inverted real image one focal length from the lens, on the far side of the lens.

Rays from an object at finite distance are focused further from the lens than the focal distance; the closer the object is to the lens, the further the image is from the lens. With concave lenses, incoming parallel rays diverge after going through the lens, in such a way that they seem to have originated at an upright virtual image one focal length from the lens, on the same side of the lens from which the parallel rays approach.

Rays from an object at finite distance are associated with a virtual image that is closer to the lens than the focal length, and on the same side of the lens as the object. The closer the object is to the lens, the closer the virtual image is to the lens.

Likewise, the magnification of a lens is given by

${\displaystyle M=-{\frac {S_{2}}{S_{1}}}={\frac {f}{f-S_{1}}}}$

where the negative sign is given, by convention, to indicate an upright image for positive values and an inverted image for negative values. Similar to mirrors, upright images produced by single lenses are virtual, while inverted images are real. [3]
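
The thin-lens equation and the magnification formula can be packaged together. A minimal sketch (the 10 cm focal length and the object distances are arbitrary examples):

```python
def thin_lens_image(f, s1):
    """Image distance and magnification from 1/s1 + 1/s2 = 1/f.

    Sign convention as in the text: s2 > 0 on the far side of the lens
    (real image), s2 < 0 on the object's side (virtual image)."""
    s2 = 1.0 / (1.0 / f - 1.0 / s1)
    m = -s2 / s1
    return s2, m

# Object 30 cm from a convex lens with f = 10 cm:
s2, m = thin_lens_image(10.0, 30.0)
print(s2, m)    # 15 cm beyond the lens, magnification -0.5: inverted, real

# Object inside the focal length (5 cm): virtual, upright, magnified.
s2v, mv = thin_lens_image(10.0, 5.0)
print(s2v, mv)  # -10 cm (same side as the object), magnification 2.0
```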

Lenses suffer from aberrations that distort images and focal points. These are due both to geometrical imperfections and to the variation of the index of refraction with the wavelength of light (chromatic aberration). [3]

## Underlying mathematics

As a mathematical study, geometrical optics emerges as a short-wavelength limit for solutions to hyperbolic partial differential equations (Sommerfeld–Runge method) or as a property of propagation of field discontinuities according to Maxwell's equations (Luneburg method). In this short-wavelength limit, it is possible to approximate the solution locally by

${\displaystyle u(t,x)\approx a(t,x)e^{i(k\cdot x-\omega t)}}$

where ${\displaystyle k,\omega }$ satisfy a dispersion relation, and the amplitude ${\displaystyle a(t,x)}$ varies slowly. More precisely, the leading order solution takes the form

${\displaystyle a_{0}(t,x)e^{i\varphi (t,x)/\varepsilon }.}$

The phase ${\displaystyle \varphi (t,x)/\varepsilon }$ can be linearized to recover large wavenumber ${\displaystyle k:=\nabla _{x}\varphi }$ and frequency ${\displaystyle \omega :=-\partial _{t}\varphi }$. The amplitude ${\displaystyle a_{0}}$ satisfies a transport equation. The small parameter ${\displaystyle \varepsilon \,}$ enters the scene due to highly oscillatory initial conditions. Thus, when initial conditions oscillate much faster than the coefficients of the differential equation, solutions will be highly oscillatory, and transported along rays. Assuming coefficients in the differential equation are smooth, the rays will be too. In other words, refraction does not take place. The motivation for this technique comes from studying the typical scenario of light propagation where short wavelength light travels along rays that minimize (more or less) its travel time. Its full application requires tools from microlocal analysis.

### Sommerfeld–Runge method

The method of obtaining equations of geometrical optics by taking the limit of zero wavelength was first described by Arnold Sommerfeld and J. Runge in 1911. [6] Their derivation was based on an oral remark by Peter Debye. [7] [8] Consider a monochromatic scalar field ${\displaystyle \psi (\mathbf {r} ,t)=\phi (\mathbf {r} )e^{i\omega t}}$, where ${\displaystyle \psi }$ could be any of the components of the electric or magnetic field, and hence the function ${\displaystyle \phi }$ satisfies the wave equation

${\displaystyle \nabla ^{2}\phi +k_{o}^{2}n(\mathbf {r} )^{2}\phi =0}$

where ${\displaystyle k_{o}=\omega /c=2\pi /\lambda _{o}}$ with ${\displaystyle c}$ being the speed of light in vacuum. Here, ${\displaystyle n(\mathbf {r} )}$ is the refractive index of the medium. Without loss of generality, let us introduce ${\displaystyle \phi =A(k_{o},\mathbf {r} )e^{ik_{o}S(\mathbf {r} )}}$ to convert the equation to

${\displaystyle -k_{o}^{2}A[(\nabla S)^{2}-n^{2}]+2ik_{o}(\nabla S\cdot \nabla A)+ik_{o}A\nabla ^{2}S+\nabla ^{2}A=0.}$

Since the underlying principle of geometrical optics lies in the limit ${\displaystyle \lambda _{o}\sim k_{o}^{-1}\rightarrow 0}$, the following asymptotic series is assumed,

${\displaystyle A(k_{o},\mathbf {r} )=\sum _{m=0}^{\infty }{\frac {A_{m}(\mathbf {r} )}{(ik_{o})^{m}}}}$

For a large but finite value of ${\displaystyle k_{o}}$, the series diverges, and one has to be careful in keeping only the appropriate first few terms. For each value of ${\displaystyle k_{o}}$, one can find an optimum number of terms to keep; adding more terms than the optimum number might result in a poorer approximation. [9] Substituting the series into the equation and collecting terms of different orders, one finds

${\displaystyle {\begin{aligned}O(k_{o}^{2}):&\quad (\nabla S)^{2}=n^{2},\\O(k_{o}):&\quad 2\nabla S\cdot \nabla A_{0}+A_{0}\nabla ^{2}S=0,\\O(1):&\quad 2\nabla S\cdot \nabla A_{1}+A_{1}\nabla ^{2}S=-\nabla ^{2}A_{0},\end{aligned}}}$

in general,

${\displaystyle O(k_{o}^{1-m}):\quad 2\nabla S\cdot \nabla A_{m}+A_{m}\nabla ^{2}S=-\nabla ^{2}A_{m-1}.}$

The first equation is known as the eikonal equation, which determines the eikonal ${\displaystyle S(\mathbf {r} )}$. It is a Hamilton–Jacobi equation which, written for example in Cartesian coordinates, becomes

${\displaystyle \left({\frac {\partial S}{\partial x}}\right)^{2}+\left({\frac {\partial S}{\partial y}}\right)^{2}+\left({\frac {\partial S}{\partial z}}\right)^{2}=n^{2}.}$

The remaining equations determine the functions ${\displaystyle A_{m}(\mathbf {r} )}$.
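
As a sanity check on the eikonal equation: for constant ${\displaystyle n}$, a plane-wave eikonal ${\displaystyle S=n(\alpha x+\beta y+\gamma z)}$ with ${\displaystyle \alpha ^{2}+\beta ^{2}+\gamma ^{2}=1}$ satisfies it exactly. In the sketch below the values of ${\displaystyle n}$, the direction, and the evaluation point are arbitrary, and the gradient is taken by central finite differences:

```python
# Check (S_x)^2 + (S_y)^2 + (S_z)^2 = n^2 for a plane-wave eikonal.
n = 1.5
a, b, c = 2/3, 1/3, 2/3                   # unit direction: 4/9 + 1/9 + 4/9 = 1

def S(x, y, z):
    return n * (a * x + b * y + c * z)

# Central finite differences for the gradient at an arbitrary point:
h, (x, y, z) = 1e-6, (0.3, -0.7, 1.2)
Sx = (S(x + h, y, z) - S(x - h, y, z)) / (2 * h)
Sy = (S(x, y + h, z) - S(x, y - h, z)) / (2 * h)
Sz = (S(x, y, z + h) - S(x, y, z - h)) / (2 * h)

print(Sx**2 + Sy**2 + Sz**2, n**2)  # both ~2.25
```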

### Luneburg method

The method of obtaining equations of geometrical optics by analysing surfaces of discontinuities of solutions to Maxwell's equations was first described by Rudolf Karl Luneburg in 1944. [10] It does not restrict the electromagnetic field to have a special form (in the Sommerfeld-Runge method it is not clear that a field whose amplitude ${\displaystyle \phi }$ is made to depend on ${\displaystyle \omega }$ would still yield the eikonal equation, i.e., a geometrical optics wave front). The main conclusion of this approach is the following:

Theorem. Suppose the fields ${\displaystyle \mathbf {\vec {E}} (x,y,z,t)}$ and ${\displaystyle \mathbf {\vec {H}} (x,y,z,t)}$ (in a linear isotropic medium described by permittivity ${\displaystyle \varepsilon (x,y,z)}$ and permeability ${\displaystyle \mu (x,y,z)}$) have finite discontinuities along a (moving) surface in ${\displaystyle \mathbf {R} ^{3}}$ described by the equation ${\displaystyle \psi (x,y,z)-ct=0}$. Then Maxwell's equations in the integral form imply that ${\displaystyle \psi }$ satisfies the eikonal equation:

${\displaystyle \psi _{x}^{2}+\psi _{y}^{2}+\psi _{z}^{2}=\varepsilon \mu =n^{2}}$,

where ${\displaystyle n}$ is the index of refraction of the medium (Gaussian units).

An example of such surface of discontinuity is the initial wave front emanating from a source that starts radiating at a certain instant of time.

The surfaces of field discontinuity thus become geometrical optics wave fronts with the corresponding geometrical optics fields defined as:

${\displaystyle \mathbf {\vec {E}} ^{*}(x,y,z)=\mathbf {\vec {E}} (x,y,z,\psi (x,y,z)/c)}$
${\displaystyle \mathbf {\vec {H}} ^{*}(x,y,z)=\mathbf {\vec {H}} (x,y,z,\psi (x,y,z)/c)}$

Those fields obey transport equations consistent with the transport equations of the Sommerfeld-Runge approach. Light rays in Luneburg's theory are defined as trajectories orthogonal to the discontinuity surfaces and with the right parametrisation they can be shown to obey Fermat's principle of least time thus establishing the identity of those rays with light rays of standard optics.

The above developments can be generalised to anisotropic media. [11]

The proof of Luneburg's theorem is based on investigating how Maxwell's equations govern the propagation of discontinuities of solutions. The basic technical lemma is as follows:

A technical lemma. Let ${\displaystyle \varphi (x,y,z,t)=0}$ be a hypersurface (a 3-dimensional manifold) in spacetime ${\displaystyle \mathbf {R} ^{4}}$ on which one or more of: ${\displaystyle \mathbf {\vec {E}} (x,y,z,t)}$, ${\displaystyle \mathbf {\vec {H}} (x,y,z,t)}$, ${\displaystyle \varepsilon (x,y,z)}$, ${\displaystyle \mu (x,y,z)}$, have a finite discontinuity. Then at each point of the hypersurface the following formulas hold:

${\displaystyle \nabla \varphi \times [\mathbf {\vec {H}} ]-{1 \over c}\,\varphi _{t}\,[\varepsilon \mathbf {\vec {E}} ]=0}$
${\displaystyle \nabla \varphi \times [\mathbf {\vec {E}} ]+{1 \over c}\,\varphi _{t}\,[\mu \mathbf {\vec {H}} ]=0}$
${\displaystyle \nabla \cdot [\varepsilon \mathbf {\vec {E}} ]=0}$
${\displaystyle \nabla \cdot [\mu \mathbf {\vec {H}} ]=0}$

where the ${\displaystyle \nabla }$ operator acts in the ${\displaystyle xyz}$-space (for every fixed ${\displaystyle t}$) and the square brackets denote the difference in values on both sides of the discontinuity surface (set up according to an arbitrary but fixed convention, e.g. the gradient ${\displaystyle \nabla \varphi }$ pointing in the direction of the quantities being subtracted from).

Sketch of proof. Start with Maxwell's equations away from the sources (Gaussian units):

${\displaystyle \nabla \cdot \varepsilon \mathbf {\vec {E}} =0}$
${\displaystyle \nabla \cdot \mu \mathbf {\vec {H}} =0}$
${\displaystyle \nabla \times \mathbf {\vec {E}} +{\mu \over c}\,\mathbf {\vec {H}} _{t}=0}$
${\displaystyle \nabla \times \mathbf {\vec {H}} -{\varepsilon \over c}\,\mathbf {\vec {E}} _{t}=0}$

Using Stokes' theorem in ${\displaystyle \mathbf {R} ^{4}}$ one can conclude from the first of the above equations that for any domain ${\displaystyle D}$ in ${\displaystyle \mathbf {R} ^{4}}$ with a piecewise smooth boundary ${\displaystyle \Gamma }$ the following is true:

${\displaystyle \oint _{\Gamma }(\mathbf {\vec {M}} \cdot \varepsilon \mathbf {\vec {E}} )\,dS=0}$

where ${\displaystyle \mathbf {\vec {M}} =(x_{N},y_{N},z_{N})}$ is the projection of the outward unit normal ${\displaystyle (x_{N},y_{N},z_{N},t_{N})}$ of ${\displaystyle \Gamma }$ onto the 3D slice ${\displaystyle t={\rm {const}}}$, and ${\displaystyle dS}$ is the volume 3-form on ${\displaystyle \Gamma }$. Similarly, one establishes the following from the remaining Maxwell's equations:

${\displaystyle \oint _{\Gamma }(\mathbf {\vec {M}} \cdot \mu \mathbf {\vec {H}} )\,dS=0}$
${\displaystyle \oint _{\Gamma }(\mathbf {\vec {M}} \times \mathbf {\vec {E}} +{\mu \over c}\,t_{N}\,\mathbf {\vec {H}} )\,dS=0}$
${\displaystyle \oint _{\Gamma }(\mathbf {\vec {M}} \times \mathbf {\vec {H}} -{\varepsilon \over c}\,t_{N}\,\mathbf {\vec {E}} )\,dS=0}$

Now by considering arbitrary small sub-surfaces ${\displaystyle \Gamma _{0}}$ of ${\displaystyle \Gamma }$ and setting up small neighbourhoods surrounding ${\displaystyle \Gamma _{0}}$ in ${\displaystyle \mathbf {R} ^{4}}$, and subtracting the above integrals accordingly, one obtains:

${\displaystyle \int _{\Gamma _{0}}(\nabla \varphi \cdot [\varepsilon \mathbf {\vec {E}} ])\,{dS \over \|\nabla ^{4D}\varphi \|}=0}$
${\displaystyle \int _{\Gamma _{0}}(\nabla \varphi \cdot [\mu \mathbf {\vec {H}} ])\,{dS \over \|\nabla ^{4D}\varphi \|}=0}$
${\displaystyle \int _{\Gamma _{0}}\left(\nabla \varphi \times [\mathbf {\vec {H}} ]-{1 \over c}\,\varphi _{t}\,[\varepsilon \mathbf {\vec {E}} ]\right)\,{dS \over \|\nabla ^{4D}\varphi \|}=0}$
${\displaystyle \int _{\Gamma _{0}}\left(\nabla \varphi \times [\mathbf {\vec {E}} ]+{1 \over c}\,\varphi _{t}\,[\mu \mathbf {\vec {H}} ]\right)\,{dS \over \|\nabla ^{4D}\varphi \|}=0}$

where ${\displaystyle \nabla ^{4D}}$ denotes the gradient in the 4D ${\displaystyle xyzt}$-space. And since ${\displaystyle \Gamma _{0}}$ is arbitrary, the integrands must be equal to 0 which proves the lemma.

It is now easy to show that as they propagate through a continuous medium, the discontinuity surfaces obey the eikonal equation. Specifically, if ${\displaystyle \varepsilon }$ and ${\displaystyle \mu }$ are continuous, then the discontinuities of ${\displaystyle \mathbf {\vec {E}} }$ and ${\displaystyle \mathbf {\vec {H}} }$ satisfy: ${\displaystyle [\varepsilon \mathbf {\vec {E}} ]=\varepsilon [\mathbf {\vec {E}} ]}$ and ${\displaystyle [\mu \mathbf {\vec {H}} ]=\mu [\mathbf {\vec {H}} ]}$. In this case the first two equations of the lemma can be written as:

${\displaystyle \nabla \varphi \times [\mathbf {\vec {H}} ]-{\varepsilon \over c}\,\varphi _{t}\,[\mathbf {\vec {E}} ]=0}$
${\displaystyle \nabla \varphi \times [\mathbf {\vec {E}} ]+{\mu \over c}\,\varphi _{t}\,[\mathbf {\vec {H}} ]=0}$

Taking the cross product of the first equation with ${\displaystyle \nabla \varphi }$ and substituting the second yields:

${\displaystyle \nabla \varphi \times (\nabla \varphi \times [\mathbf {\vec {H}} ])-{\varepsilon \over c}\,\varphi _{t}\,(\nabla \varphi \times [\mathbf {\vec {E}} ])=(\nabla \varphi \cdot [\mathbf {\vec {H}} ])\,\nabla \varphi -\|\nabla \varphi \|^{2}\,[\mathbf {\vec {H}} ]+{\varepsilon \mu \over c^{2}}\varphi _{t}^{2}\,[\mathbf {\vec {H}} ]=0}$

By the second of Maxwell's equations, ${\displaystyle \nabla \varphi \cdot [\mathbf {\vec {H}} ]=0}$, hence, for points lying on the surface ${\displaystyle \varphi =0}$ only:

${\displaystyle \|\nabla \varphi \|^{2}={\varepsilon \mu \over c^{2}}\varphi _{t}^{2}}$

(Notice that the presence of the discontinuity is essential in this step; we would be dividing by zero otherwise.)

On physical grounds one can assume without loss of generality that ${\displaystyle \varphi }$ is of the following form: ${\displaystyle \varphi (x,y,z,t)=\psi (x,y,z)-ct}$, i.e. a 2D surface moving through space, modelled as level surfaces of ${\displaystyle \psi }$. (Mathematically ${\displaystyle \psi }$ exists if ${\displaystyle \varphi _{t}\neq 0}$ by the implicit function theorem.) The above equation written in terms of ${\displaystyle \psi }$ becomes:

${\displaystyle \|\nabla \psi \|^{2}={\varepsilon \mu \over c^{2}}\,(-c)^{2}=\varepsilon \mu =n^{2}}$

i.e.,

${\displaystyle \psi _{x}^{2}+\psi _{y}^{2}+\psi _{z}^{2}=n^{2}}$

which is the eikonal equation and it holds for all ${\displaystyle x}$, ${\displaystyle y}$, ${\displaystyle z}$, since the variable ${\displaystyle t}$ is absent. Other laws of optics like Snell's law and Fresnel formulae can be similarly obtained by considering discontinuities in ${\displaystyle \varepsilon }$ and ${\displaystyle \mu }$.
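
The key algebraic step in the derivation above is the triple-product expansion ${\displaystyle \nabla \varphi \times (\nabla \varphi \times [\mathbf {\vec {H}} ])=(\nabla \varphi \cdot [\mathbf {\vec {H}} ])\,\nabla \varphi -\|\nabla \varphi \|^{2}\,[\mathbf {\vec {H}} ]}$, which can be checked numerically on arbitrary vectors (assuming NumPy; the random vectors merely stand in for ${\displaystyle \nabla \varphi }$ and the field jump):

```python
import numpy as np

rng = np.random.default_rng(0)
grad_phi = rng.normal(size=3)   # stands in for the gradient of phi
H = rng.normal(size=3)          # stands in for the field jump [H]

# The "BAC-CAB" triple-product expansion used in the derivation:
lhs = np.cross(grad_phi, np.cross(grad_phi, H))
rhs = np.dot(grad_phi, H) * grad_phi - np.dot(grad_phi, grad_phi) * H

print(np.allclose(lhs, rhs))  # True
```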

### General equation using four-vector notation

In four-vector notation used in special relativity, the wave equation can be written as

${\displaystyle {\frac {\partial ^{2}\psi }{\partial x_{i}\partial x^{i}}}=0}$

and the substitution ${\displaystyle \psi =Ae^{iS/\epsilon }}$ leads to [12]

${\displaystyle -{\frac {A}{\epsilon ^{2}}}{\frac {\partial S}{\partial x_{i}}}{\frac {\partial S}{\partial x^{i}}}+{\frac {2i}{\epsilon }}{\frac {\partial A}{\partial x_{i}}}{\frac {\partial S}{\partial x^{i}}}+{\frac {iA}{\epsilon }}{\frac {\partial ^{2}S}{\partial x_{i}\partial x^{i}}}+{\frac {\partial ^{2}A}{\partial x_{i}\partial x^{i}}}=0.}$

Therefore the eikonal equation is given by

${\displaystyle {\frac {\partial S}{\partial x_{i}}}{\frac {\partial S}{\partial x^{i}}}=0.}$

Once the eikonal is found by solving the above equation, the wave four-vector can be found from

${\displaystyle k_{i}=-{\frac {\partial S}{\partial x^{i}}}.}$
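
For a vacuum plane wave the eikonal is null, as this equation requires: with ${\displaystyle S=\mathbf {k} \cdot \mathbf {x} -\omega t}$ and ${\displaystyle \omega =c|\mathbf {k} |}$, the invariant ${\displaystyle (\omega /c)^{2}-|\mathbf {k} |^{2}}$ vanishes. A quick check (the wavevector components below are arbitrary):

```python
import math

# For a vacuum plane wave, S = k.x - w*t with w = c*|k| makes the
# four-gradient of S null: (w/c)^2 - |k|^2 = 0 with signature (+,-,-,-).
c = 299792458.0
k = (3.0, 4.0, 12.0)                      # |k| = 13, arbitrary units of 1/m
k_norm = math.sqrt(sum(ki**2 for ki in k))
w = c * k_norm

null_invariant = (w / c) ** 2 - sum(ki**2 for ki in k)
print(null_invariant)  # ~0: the eikonal equation holds
```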

## References

1. Arthur Schuster, An Introduction to the Theory of Optics, London: Edward Arnold, 1904.
2. Greivenkamp, John E. (2004). Field Guide to Geometrical Optics. SPIE Field Guides. Vol. 1. SPIE. pp. 19–20. ISBN 0-8194-5294-7.
3. Young, Hugh D. (1992). University Physics (8th ed.). Addison-Wesley. ISBN 0-201-52981-5. Chapter 35.
4. E. W. Marchand, Gradient Index Optics, New York, NY: Academic Press, 1978.
5. Hecht, Eugene (1987). Optics (2nd ed.). Addison-Wesley. ISBN 0-201-11609-X. Chapters 5 & 6.
6. Sommerfeld, A., & Runge, J. (1911). "Anwendung der Vektorrechnung auf die Grundlagen der geometrischen Optik". Annalen der Physik, 340(7), 277–298.
7. Born, M., & Wolf, E. (2013). Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light. Elsevier.
8. Borowitz, S. (1967). Fundamentals of Quantum Mechanics: Particles, Waves, and Wave Mechanics.
10. Luneburg, R. K., Mathematical Theory of Optics, Brown University Press, 1944 [mimeographed notes]; University of California Press, 1964.
11. Kline, M., & Kay, I. W., Electromagnetic Theory and Geometrical Optics, Interscience Publishers, 1965.
12. Landau, L. D., & Lifshitz, E. M. (1975). The Classical Theory of Fields.