Varignon frame

Figure: Varignon frame (Varignon-app.svg).

The Varignon frame, named after Pierre Varignon, is a mechanical device which can be used to determine an optimal location of a warehouse for the distribution of goods to a set of shops. Optimal means that the sum of the weighted distances of the shops to the warehouse is minimal. The frame consists of a board with $n$ holes corresponding to the $n$ shops at the locations $\mathbf{x}_1,\dots,\mathbf{x}_n$. One end of each of $n$ strings is tied into a common knot; each loose end is passed through one of the holes and attached to a weight below the board (see diagram). If the influence of friction and other odds of the real world are neglected, the knot takes a position of equilibrium $\mathbf{v}$. It can be shown (see below) that point $\mathbf{v}$ is the optimal location, which minimizes the weighted sum of distances

$(1):\quad D(\mathbf{x})=\sum_{i=1}^{n} m_i\,\|\mathbf{x}-\mathbf{x}_i\|\ ,$

where $m_i$ is the mass of the $i$-th weight.

This optimization problem is called the Weber problem. [1]
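For concreteness, the objective (1) is easy to state in code. The following is a minimal sketch in Python; the names `D`, `points`, and `weights` are illustrative choices, not from the source.

```python
import numpy as np

def D(x, points, weights):
    """Weighted sum of distances from x to the given points, eq. (1)."""
    x = np.asarray(x, dtype=float)
    points = np.asarray(points, dtype=float)    # shape (n, 2): locations x_i
    weights = np.asarray(weights, dtype=float)  # shape (n,): masses m_i
    return float(np.sum(weights * np.linalg.norm(points - x, axis=1)))
```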

Mechanical problem - optimization problem

Figure: At the point $\mathbf{v}$ the sum of all forces is 0 (Varignon-ex-5-f.svg).

If the holes have the locations $\mathbf{x}_1,\dots,\mathbf{x}_n$ and the masses of the weights are $m_1,\dots,m_n$, then the force acting at the $i$-th string has the magnitude $m_i g$ ($g$: constant of gravity) and the direction $\frac{\mathbf{x}_i-\mathbf{v}}{\|\mathbf{x}_i-\mathbf{v}\|}$ (unit vector). Summing up all forces and cancelling the common term $g$, one gets the equation

$(2):\quad \sum_{i=1}^{n} m_i\,\frac{\mathbf{x}_i-\mathbf{v}}{\|\mathbf{x}_i-\mathbf{v}\|}=\mathbf{0}\ .$

(At the point of equilibrium the sum of all forces is zero!)

This is a non-linear system of equations for the coordinates of the point $\mathbf{v}$, which can be solved iteratively by the Weiszfeld algorithm (see below). [2]
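Condition (2) can also be checked numerically at any candidate point. A minimal sketch, reusing the illustrative `points`/`weights` arrays from above; the function name `net_force` is an assumption.

```python
import numpy as np

def net_force(v, points, weights):
    """Sum of m_i * (x_i - v) / |x_i - v|, eq. (2); approximately 0 at equilibrium."""
    v = np.asarray(v, dtype=float)
    diffs = np.asarray(points, dtype=float) - v  # x_i - v
    dists = np.linalg.norm(diffs, axis=1)        # |x_i - v|
    m = np.asarray(weights, dtype=float)
    return (m[:, None] * diffs / dists[:, None]).sum(axis=0)
```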

The connection between equation (1) and equation (2) is:

$(3):\quad \nabla D(\mathbf{x})=-\sum_{i=1}^{n} m_i\,\frac{\mathbf{x}_i-\mathbf{x}}{\|\mathbf{x}_i-\mathbf{x}\|}\ .$

Hence the function $D$ has a local extremum at the point $\mathbf{v}$, and the Varignon frame provides the optimal location experimentally.
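Indeed, for $\mathbf{x}\neq\mathbf{x}_i$ the gradient of each distance term in (1) is $\frac{\mathbf{x}-\mathbf{x}_i}{\|\mathbf{x}-\mathbf{x}_i\|}$, so

$\nabla D(\mathbf{x})=\sum_{i=1}^{n} m_i\,\frac{\mathbf{x}-\mathbf{x}_i}{\|\mathbf{x}-\mathbf{x}_i\|}=-\sum_{i=1}^{n} m_i\,\frac{\mathbf{x}_i-\mathbf{x}}{\|\mathbf{x}_i-\mathbf{x}\|}\ ,$

and the equilibrium condition (2) is exactly $\nabla D(\mathbf{v})=\mathbf{0}$.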

Figure: Varignon frame, example (Varignon-ex-5.svg).
Figure: Level curves (Varignon-ex-5-niv-c.svg).

Example

For the following example five points $\mathbf{x}_1,\dots,\mathbf{x}_5$ with weights $m_1,\dots,m_5$ were chosen (see first figure).

The coordinates of the optimal solution (red point) and the optimal weighted sum of lengths are shown in the first picture. The second picture shows level curves, which consist of points with equal, but not optimal, sums. Level curves can be used for assigning areas where the weighted sums do not exceed a fixed level. Geometrically they are implicit curves with equations

$D(\mathbf{x})-c=0,\quad c>D(\mathbf{v})$ (see equation (1)).
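Level curves of this kind can be drawn directly from (1). Below is a minimal matplotlib sketch; since the article's concrete example values did not survive extraction, the `points` and `weights` data here are placeholders, not the original example.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data; the original example values are not recoverable from the text.
points = np.array([[0.0, 0.0], [40.0, 0.0], [50.0, 40.0], [10.0, 50.0], [-10.0, 30.0]])
weights = np.array([10.0, 10.0, 20.0, 10.0, 5.0])

xs, ys = np.linspace(-25, 65, 300), np.linspace(-15, 65, 300)
X, Y = np.meshgrid(xs, ys)
# D evaluated on the grid: sum_i m_i * |(X, Y) - x_i|
Z = sum(m * np.hypot(X - p[0], Y - p[1]) for p, m in zip(points, weights))

plt.contour(X, Y, Z, levels=15)             # level curves D(x) = c
plt.plot(points[:, 0], points[:, 1], "ko")  # the given points x_i
plt.gca().set_aspect("equal")
plt.show()
```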
Figure: In the case $n=2,\ m_1=m_2=1$ the level curves are confocal ellipses (Varignon-n2-niv-c.svg).

Special cases n=1 and n=2

For $n=1$ the optimal location is $\mathbf{v}=\mathbf{x}_1$. For $n=2$ and $m_1>m_2$ the optimal location is $\mathbf{v}=\mathbf{x}_1$; for $m_1=m_2$ every point of the line segment between $\mathbf{x}_1$ and $\mathbf{x}_2$ is optimal.

Weiszfeld algorithm and a fixpoint problem

Figure: Iteration as fixpoint determination for the example: starting point $\mathbf{v}_0=(25,15)$ (green), starting point $\mathbf{v}_m$ (blue) is the center of mass (Varignon-fixp.svg).

Replacing in formula (2) the vector $\mathbf{v}$ in the numerator by $\mathbf{v}_{k+1}$ and in the denominator by $\mathbf{v}_k$, and solving the equation for $\mathbf{v}_{k+1}$, one gets: [3]

$(4):\quad \mathbf{v}_{k+1}=\left(\sum_{i=1}^{n}\frac{m_i\,\mathbf{x}_i}{\|\mathbf{x}_i-\mathbf{v}_k\|}\right)\Big/\left(\sum_{i=1}^{n}\frac{m_i}{\|\mathbf{x}_i-\mathbf{v}_k\|}\right)$

which describes an iteration. A suitable starting point is the center of mass with mass $m_i$ in point $\mathbf{x}_i$:

$\mathbf{v}_0=\frac{\sum_{i=1}^{n} m_i\,\mathbf{x}_i}{\sum_{i=1}^{n} m_i}\ .$

This algorithm is called the Weiszfeld algorithm. [4]
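A compact sketch of iteration (4) in Python; the function name, tolerance, and iteration cap are illustrative choices, not from the source.

```python
import numpy as np

def weiszfeld(points, weights, tol=1e-9, max_iter=1000):
    """Iterate eq. (4), starting from the center of mass (the suggested v_0)."""
    points = np.asarray(points, dtype=float)    # shape (n, 2): locations x_i
    weights = np.asarray(weights, dtype=float)  # shape (n,): masses m_i
    v = weights @ points / weights.sum()        # v_0: center of mass
    for _ in range(max_iter):
        dists = np.linalg.norm(points - v, axis=1)  # |x_i - v_k|
        coeff = weights / dists                     # m_i / |x_i - v_k|
        v_next = coeff @ points / coeff.sum()       # eq. (4)
        if np.linalg.norm(v_next - v) < tol:
            return v_next
        v = v_next
    return v
```

For instance, `weiszfeld(points, weights)` with the placeholder data above returns an approximation of the optimal location $\mathbf{v}$.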

Formula (4) can be seen as the iteration formula for determining the fixed point of function

$(5):\quad F(\mathbf{x})=\left(\sum_{i=1}^{n}\frac{m_i\,\mathbf{x}_i}{\|\mathbf{x}_i-\mathbf{x}\|}\right)\Big/\left(\sum_{i=1}^{n}\frac{m_i}{\|\mathbf{x}_i-\mathbf{x}\|}\right)$

with the fixpoint equation

$F(\mathbf{x})=\mathbf{x}$

(see fixed point).

Remark on numerical problems:
The iteration algorithm described here may run into numerical problems if the point $\mathbf{v}_k$ gets close to one of the points $\mathbf{x}_1,\dots,\mathbf{x}_n$.
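A common safeguard (an implementation choice, not prescribed by the source) is to bound the distances away from zero inside the iteration, e.g. in the sketch above:

```python
dists = np.maximum(np.linalg.norm(points - v, axis=1), 1e-12)  # keep |x_i - v_k| away from 0
```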

See also

  * Geometric median
  * Weber problem


References

  1. Z. Drezner, H.W. Hamacher: Facility Location, Springer, 2004, ISBN 3-540-21345-7, p. 7.
  2. Horst W. Hamacher: Mathematische Lösungsverfahren für planare Standortprobleme, Vieweg+Teubner-Verlag, 2019, ISBN 978-3-663-01968-8, p. 31.
  3. Karl-Werner Hansmann: Industrielles Management, De Gruyter Verlag, 2014, ISBN 9783486840827, p. 115.
  4. See Z. Drezner, H.W. Hamacher: Facility Location, p. 9.