Talent scheduling represents a complex optimization challenge within the fields of computer science and operations research, specifically categorized under combinatorial optimization. Consider, for example, a case involving the production of multiple films, each comprising several scenes that necessitate the participation of one or more actors. Importantly, only one scene can be filmed per day, and the remuneration for the actors is calculated on a daily basis. A critical constraint in this problem is that actors must be engaged for consecutive days; for instance, an actor cannot be contracted for filming on the first and third days without also being hired on the intervening second day. Furthermore, during the entire hiring period, producers are obligated to compensate the actors, even on days when they are not actively participating in filming. The primary objective of talent scheduling is to minimize the total salary expenditure for the actors by optimizing the sequence in which scenes are filmed. [1]
Consider a film shoot composed of $n$ shooting days and involving a total of $m$ actors. Then we use the day out of days matrix (DODM) $T^0$ to represent the requirements for the various shooting days: $T^0$ is the $m \times n$ matrix with the $(i,j)$th entry given by

$$t^0_{ij} = \begin{cases} 1, & \text{if actor } i \text{ is required on shooting day } j,\\ 0, & \text{otherwise.} \end{cases}$$
Then we define the pay vector $c = (c_1, \ldots, c_m)^{\mathsf T}$, with the $i$th element given by $c_i$, the rate of pay per day of the $i$th actor. Let $\sigma$ denote any permutation of the $n$ columns of $T^0$; we have $\sigma \in \Sigma_n$, where $\Sigma_n$ is the permutation set of the $n$ shooting days. Then define $T(\sigma)$ to be the matrix $T^0$ with its columns permuted according to $\sigma$; we have

$$t_{ij}(\sigma) = t^0_{i,\sigma(j)}.$$
Then we use $e_i(\sigma)$ and $l_i(\sigma)$ to denote respectively the earliest and latest days in the schedule determined by $\sigma$ which require actor $i$. So actor $i$ will be hired for $l_i(\sigma) - e_i(\sigma) + 1$ consecutive days. But of these days, only $r_i = \sum_{j=1}^{n} t^0_{ij}$ are actually required, which means that the number of unnecessary (hold) days is

$$h_i(\sigma) = l_i(\sigma) - e_i(\sigma) + 1 - r_i.$$
The total cost of unnecessary days is

$$K(\sigma) = \sum_{i=1}^{m} c_i \, h_i(\sigma),$$

and $K(\sigma)$ is the objective function we should minimize. [1]
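To make the objective concrete, the following Python sketch (the helper name `hold_cost` and the small example matrix are illustrative, not taken from the sources) evaluates $K(\sigma)$ for a given scene ordering directly from the DODM and the pay vector, and finds the cheapest ordering of a tiny instance by exhaustive search.

```python
from itertools import permutations

def hold_cost(T0, c, sigma):
    """Total cost of unnecessary (hold) days for the column order sigma.

    T0    : day-out-of-days matrix, T0[i][j] = 1 if actor i is needed in scene j
    c     : c[i] is the daily pay rate of actor i
    sigma : tuple listing the scenes in the order they are shot
    """
    cost = 0
    for i, row in enumerate(T0):
        days = [d for d, j in enumerate(sigma) if row[j] == 1]   # days actor i works
        if days:
            hired = days[-1] - days[0] + 1        # l_i(sigma) - e_i(sigma) + 1
            cost += c[i] * (hired - len(days))    # c_i * h_i(sigma)
    return cost

# A small made-up example: 3 actors, 4 scenes (one scene per day).
T0 = [[1, 1, 0, 1],
      [0, 1, 1, 0],
      [1, 0, 0, 1]]
c = [20, 5, 10]

best = min(permutations(range(4)), key=lambda s: hold_cost(T0, c, s))
print(best, hold_cost(T0, c, best))
```

Exhaustive enumeration is only viable for very small $n$, which is consistent with the hardness result below.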
It can be proved that the talent scheduling problem is NP-hard by a reduction from the optimal linear arrangement (OLA) problem. [2] Even if we restrict the problem by requiring that each actor is needed for just two days and all actors' salaries are 1, the OLA problem is still polynomially reducible to this restricted version, so it remains NP-hard. Thus, the problem is unlikely to have a pseudo-polynomial algorithm. [3]
The integer programming model is given by: [4]
$$\begin{aligned}
\text{Minimize} \quad & \sum_{i=1}^{m} c_i \left( l_i - e_i + 1 - r_i \right) \\
\text{subject to} \quad & \sum_{k=1}^{n} x_{jk} = 1, && j = 1, \ldots, n, \\
& \sum_{j=1}^{n} x_{jk} = 1, && k = 1, \ldots, n, \\
& e_i \;\le\; \sum_{k=1}^{n} k \, x_{jk} \;\le\; l_i, && \text{for all } i, j \text{ with } t^0_{ij} = 1, \\
& x_{jk} \in \{0, 1\}, && j, k = 1, \ldots, n.
\end{aligned}$$
In this model, $e_i$ denotes the earliest shooting day for talent $i$, $l_i$ is the latest shooting day for talent $i$, and $x$ is the schedule for the project, i.e. $x_{jk} = 1$ if scene $j$ is shot on day $k$ and $x_{jk} = 0$ otherwise.
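As a sketch of how such a model can be handed to a solver, the following Python code builds the formulation above with the open-source PuLP library and its bundled CBC solver (the data are again made up, and the exact model used in [4] may differ in detail).

```python
import pulp

T0 = [[1, 1, 0, 1],        # day-out-of-days matrix: rows = actors, columns = scenes
      [0, 1, 1, 0],
      [1, 0, 0, 1]]
c = [20, 5, 10]            # pay rate per day of each actor
m, n = len(T0), len(T0[0])
r = [sum(row) for row in T0]            # days on which each actor is actually required

prob = pulp.LpProblem("talent_scheduling", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(n), range(n)), cat="Binary")   # x[j][k]: scene j on day k
e = pulp.LpVariable.dicts("e", range(m), lowBound=1, upBound=n)      # earliest day of actor i
l = pulp.LpVariable.dicts("l", range(m), lowBound=1, upBound=n)      # latest day of actor i

# objective: total cost of unnecessary (hold) days
prob += pulp.lpSum(c[i] * (l[i] - e[i] + 1 - r[i]) for i in range(m))

# each scene is shot on exactly one day, and each day hosts exactly one scene
for j in range(n):
    prob += pulp.lpSum(x[j][k] for k in range(n)) == 1
for k in range(n):
    prob += pulp.lpSum(x[j][k] for j in range(n)) == 1

# if actor i appears in scene j, the shooting day of scene j lies between e[i] and l[i]
for j in range(n):
    day_j = pulp.lpSum((k + 1) * x[j][k] for k in range(n))
    for i in range(m):
        if T0[i][j]:
            prob += e[i] <= day_j
            prob += l[i] >= day_j

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("minimum hold cost:", pulp.value(prob.objective))
```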
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
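As a toy illustration (not from the source), the maximum likelihood estimate of the success probability of a Bernoulli model can be found by maximizing the log-likelihood of observed coin flips; for this model the numerical maximizer coincides with the sample mean.

```python
import math

def log_likelihood(p, data):
    """Bernoulli log-likelihood of 0/1 observations for success probability p."""
    return sum(math.log(p) if x == 1 else math.log(1 - p) for x in data)

data = [1, 0, 1, 1, 0, 1, 1, 1]                 # 6 successes in 8 trials

# crude grid search over the open interval (0, 1)
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=lambda p: log_likelihood(p, data))

print(p_hat, sum(data) / len(data))             # both are 0.75
```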
In linear algebra, the permanent of a square matrix is a function of the matrix similar to the determinant. The permanent, as well as the determinant, is a polynomial in the entries of the matrix. Both are special cases of a more general function of a matrix called the immanant.
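For an $n \times n$ matrix $A = (a_{ij})$, the two functions differ only in the sign attached to each permutation:

$$\operatorname{perm}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)},
\qquad
\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}.$$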
In mathematics, particularly in matrix theory, a permutation matrix is a square binary matrix that has exactly one entry of 1 in each row and each column with all other entries 0. An n × n permutation matrix can represent a permutation of n elements. Pre-multiplying an n-row matrix M by a permutation matrix P, forming PM, results in permuting the rows of M, while post-multiplying an n-column matrix M, forming MP, permutes the columns of M.
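A small Python sketch (plain nested lists, one common indexing convention) showing that pre-multiplying by a permutation matrix reorders rows, while post-multiplying reorders columns.

```python
def permutation_matrix(perm):
    """n x n binary matrix P with P[i][perm[i]] = 1 and zeros elsewhere."""
    n = len(perm)
    return [[1 if j == perm[i] else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Naive product of two nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = permutation_matrix([2, 0, 1])
M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

print(matmul(P, M))   # the rows of M are reordered
print(matmul(M, P))   # the columns of M are reordered
```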
The Ising model, named after the physicists Ernst Ising and Wilhelm Lenz, is a mathematical model of ferromagnetism in statistical mechanics. The model consists of discrete variables that represent magnetic dipole moments of atomic "spins" that can be in one of two states. The spins are arranged in a graph, usually a lattice, allowing each spin to interact with its neighbors. Neighboring spins that agree have a lower energy than those that disagree; the system tends to the lowest energy but heat disturbs this tendency, thus creating the possibility of different structural phases. The model allows the identification of phase transitions as a simplified model of reality. The two-dimensional square-lattice Ising model is one of the simplest statistical models to show a phase transition.
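A common form of the configuration energy (Hamiltonian), assuming a uniform coupling $J$, an external field $h$, and spins $s_i \in \{-1, +1\}$, is

$$H(s) = -J \sum_{\langle i, j \rangle} s_i s_j - h \sum_i s_i,$$

where the first sum runs over neighbouring pairs; with $J > 0$, aligned neighbours lower the energy, as described above.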
In probability theory and statistics, a Gaussian process is a stochastic process (a collection of random variables indexed by time or space) such that every finite collection of those random variables has a multivariate normal distribution. The distribution of a Gaussian process is the joint distribution of all those random variables, and as such, it is a distribution over functions with a continuous domain, e.g. time or space.
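A minimal NumPy sketch (the squared-exponential kernel and its parameters are illustrative choices) that draws one sample path of a zero-mean Gaussian process at a finite set of points, using exactly the defining property that those values are jointly multivariate normal.

```python
import numpy as np

def rbf_kernel(xs, length_scale=1.0):
    """Squared-exponential covariance matrix for 1-D inputs xs."""
    d = xs[:, None] - xs[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

xs = np.linspace(0.0, 5.0, 50)     # a finite collection of input points
K = rbf_kernel(xs)                 # covariance of the process values at xs

# one draw from N(0, K) is one sample path evaluated at xs;
# the small diagonal jitter keeps K numerically positive definite
sample = np.random.multivariate_normal(np.zeros(len(xs)), K + 1e-9 * np.eye(len(xs)))
print(sample[:5])
```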
In mathematics, in particular functional analysis, the singular values of a compact operator $T : X \to Y$ acting between Hilbert spaces $X$ and $Y$ are the square roots of the (non-negative) eigenvalues of the self-adjoint operator $T^{*}T$, where $T^{*}$ denotes the adjoint of $T$.
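In the finite-dimensional case this reduces to ordinary matrices; a short NumPy check (with an arbitrary test matrix) compares the square roots of the eigenvalues of $A^{\mathsf T} A$ with the singular values returned by an SVD.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

eigvals = np.linalg.eigvalsh(A.T @ A)            # eigenvalues of the self-adjoint A^T A
from_eigs = np.sqrt(np.sort(eigvals)[::-1])      # their square roots, largest first
from_svd = np.linalg.svd(A, compute_uv=False)    # singular values from an SVD

print(from_eigs)
print(from_svd)                                  # the two agree up to rounding error
```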
Modern portfolio theory (MPT), or mean-variance analysis, is a mathematical framework for assembling a portfolio of assets such that the expected return is maximized for a given level of risk. It is a formalization and extension of diversification in investing, the idea that owning different kinds of financial assets is less risky than owning only one type. Its key insight is that an asset's risk and return should not be assessed by itself, but by how it contributes to a portfolio's overall risk and return. The variance of return is used as a measure of risk, because it is tractable when assets are combined into portfolios. Often, the historical variance and covariance of returns is used as a proxy for the forward-looking versions of these quantities, but other, more sophisticated methods are available.
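With portfolio weights $w$, expected asset returns $\mu$ and return covariance matrix $\Sigma$, the quantities being traded off are

$$\mathrm{E}[R_p] = w^{\mathsf T} \mu, \qquad \sigma_p^2 = w^{\mathsf T} \Sigma\, w,$$

and mean-variance optimization maximizes the expected return $w^{\mathsf T}\mu$ subject to a bound on the variance $w^{\mathsf T}\Sigma w$ (or, equivalently, minimizes the variance for a target return).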
In mechanics, virtual work arises in the application of the principle of least action to the study of forces and movement of a mechanical system. The work of a force acting on a particle as it moves along a displacement is different for different displacements. Among all the possible displacements that a particle may follow, called virtual displacements, one will minimize the action. This displacement is therefore the displacement followed by the particle according to the principle of least action.
The work of a force on a particle along a virtual displacement is known as the virtual work.
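For a force $\mathbf{F}$ acting through a virtual displacement $\delta\mathbf{r}$, the virtual work is the scalar product

$$\delta W = \mathbf{F} \cdot \delta\mathbf{r}.$$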
An estimation procedure that is often claimed to be part of Bayesian statistics is the maximum a posteriori (MAP) estimate of an unknown quantity, which equals the mode of the posterior density with respect to some reference measure, typically the Lebesgue measure. The MAP can be used to obtain a point estimate of an unobserved quantity on the basis of empirical data. It is closely related to the method of maximum likelihood (ML) estimation, but employs an augmented optimization objective which incorporates a prior density over the quantity one wants to estimate. MAP estimation can therefore be seen as a regularization of maximum likelihood estimation; it is, however, not a well-defined statistic of the Bayesian posterior distribution, since the mode depends on the choice of reference measure.
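Written side by side for data $x$ and parameter $\theta$ with likelihood $f(x \mid \theta)$ and prior density $g(\theta)$, the two estimates differ only by the prior factor:

$$\hat{\theta}_{\mathrm{ML}} = \arg\max_{\theta} f(x \mid \theta), \qquad \hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} f(x \mid \theta)\, g(\theta).$$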
In solid-state physics, the tight-binding model is an approach to the calculation of electronic band structure using an approximate set of wave functions based upon superposition of wave functions for isolated atoms located at each atomic site. The method is closely related to the LCAO method used in chemistry. Tight-binding models are applied to a wide variety of solids. The model gives good qualitative results in many cases and can be combined with other models that give better results where the tight-binding model fails. Though the tight-binding model is a one-electron model, the model also provides a basis for more advanced calculations like the calculation of surface states and application to various kinds of many-body problem and quasiparticle calculations.
In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an expression of the determinant of an n × n-matrix B as a weighted sum of minors, which are the determinants of some (n − 1) × (n − 1)-submatrices of B. Specifically, for every i, the Laplace expansion along the ith row is the equality

$$\det(B) = \sum_{j=1}^{n} (-1)^{i+j} b_{ij} m_{ij},$$

where $b_{ij}$ is the entry of the ith row and jth column of B, and $m_{ij}$ is the determinant of the submatrix obtained by removing the ith row and the jth column of B. Similarly, the Laplace expansion along the jth column is the equality

$$\det(B) = \sum_{i=1}^{n} (-1)^{i+j} b_{ij} m_{ij}.$$

(Each identity implies the other, since the determinants of a matrix and its transpose are the same.)
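A direct Python translation of the row expansion (exponentially slow and meant only to mirror the formula, not to serve as a practical determinant routine):

```python
def minor(B, i, j):
    """Submatrix of B with row i and column j removed."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(B) if k != i]

def det(B):
    """Determinant via Laplace (cofactor) expansion along the first row."""
    if len(B) == 1:
        return B[0][0]
    return sum((-1) ** j * B[0][j] * det(minor(B, 0, j)) for j in range(len(B)))

print(det([[1, 2, 3],
           [4, 5, 6],
           [7, 8, 10]]))   # 1*(50-48) - 2*(40-42) + 3*(32-35) = -3
```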
Covariance matrix adaptation evolution strategy (CMA-ES) is a particular kind of strategy for numerical optimization. Evolution strategies (ES) are stochastic, derivative-free methods for numerical optimization of non-linear or non-convex continuous optimization problems. They belong to the class of evolutionary algorithms and evolutionary computation. An evolutionary algorithm is broadly based on the principle of biological evolution, namely the repeated interplay of variation and selection: in each generation (iteration) new individuals are generated by variation of the current parental individuals, usually in a stochastic way. Then, some individuals are selected to become the parents in the next generation based on their fitness or objective function value $f(x)$. In this way, individuals with better and better $f$-values are generated over the generation sequence.
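The generic variation-selection loop described above can be sketched in a few lines of Python; this is a plain (μ, λ) evolution strategy with a fixed step size, i.e. without the covariance matrix adaptation that gives CMA-ES its name, and the sphere function stands in for an arbitrary objective.

```python
import random

def f(x):
    """Objective to minimize (the sphere function, as a stand-in)."""
    return sum(v * v for v in x)

dim, mu, lam, step = 5, 5, 20, 0.3
parents = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]

for generation in range(100):
    # variation: each offspring is a Gaussian perturbation of a random parent
    offspring = [[v + random.gauss(0.0, step) for v in random.choice(parents)]
                 for _ in range(lam)]
    # selection: the mu best offspring become the parents of the next generation
    parents = sorted(offspring, key=f)[:mu]

print(min(f(p) for p in parents))   # best objective value found
```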
In mathematics, a form (i.e. a homogeneous polynomial) h(x) of degree 2m in the real n-dimensional vector x is a sum of squares of forms (SOS) if and only if there exist forms $g_1(x), \ldots, g_k(x)$ of degree m such that

$$h(x) = \sum_{i=1}^{k} g_i(x)^2.$$
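For example, the quartic form $h(x, y) = x^4 + 2x^2y^2 + y^4$ of degree $2m = 4$ is SOS, since it is the square of the degree-2 form $x^2 + y^2$:

$$h(x, y) = (x^2 + y^2)^2.$$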
In mathematics and theoretical physics, Noether's second theorem relates symmetries of an action functional with a system of differential equations. The theorem is named after its discoverer, Emmy Noether.
In mathematics, the multiple zeta functions are generalizations of the Riemann zeta function, defined by

$$\zeta(s_1, \ldots, s_k) = \sum_{n_1 > n_2 > \cdots > n_k \ge 1} \frac{1}{n_1^{s_1} \cdots n_k^{s_k}}.$$
In computer networks, self-similarity is a feature of network data transfer dynamics. When modeling network data dynamics, the traditional time series models, such as an autoregressive moving average model, are not appropriate. This is because these models only provide a finite number of parameters in the model and thus interaction in a finite time window, but the network data usually have a long-range dependent temporal structure. A self-similar process is one way of modeling network data dynamics with such a long range correlation. This article defines and describes network data transfer dynamics in the context of a self-similar process. Properties of the process are shown and methods are given for graphing and estimating parameters modeling the self-similarity of network data.
In mathematics, the Lindström–Gessel–Viennot lemma provides a way to count the number of tuples of non-intersecting lattice paths, or, more generally, paths on a directed graph. It was proved by Gessel–Viennot in 1985, based on previous work of Lindström published in 1973. The lemma is named after Bernt Lindström, Ira Gessel and Gérard Viennot.
In mathematics, low-rank approximation refers to the process of approximating a given matrix by a matrix of lower rank. More precisely, it is a minimization problem, in which the cost function measures the fit between a given matrix and an approximating matrix, subject to a constraint that the approximating matrix has reduced rank. The problem is used for mathematical modeling and data compression. The rank constraint is related to a constraint on the complexity of a model that fits the data. In applications, often there are other constraints on the approximating matrix apart from the rank constraint, e.g., non-negativity and Hankel structure.
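When the fit is measured in the Frobenius (or spectral) norm and there are no additional constraints, the optimal rank-$k$ approximation is given by a truncated singular value decomposition (the Eckart–Young theorem); a NumPy sketch with an arbitrary test matrix:

```python
import numpy as np

def best_rank_k(A, k):
    """Best rank-k approximation of A in the Frobenius norm via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A = np.random.rand(6, 4)
A2 = best_rank_k(A, 2)

print(np.linalg.matrix_rank(A2))     # 2
print(np.linalg.norm(A - A2))        # Frobenius-norm approximation error
```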
The generalized distributive law (GDL) is a generalization of the distributive property which gives rise to a general message passing algorithm. It is a synthesis of the work of many authors in the information theory, digital communications, signal processing, statistics, and artificial intelligence communities. The law and algorithm were introduced in a semi-tutorial by Srinivas M. Aji and Robert J. McEliece with the same title.
In computer science, Optimal Computing Budget Allocation (OCBA) is a simulation optimization method designed to maximize the Probability of Correct Selection (PCS) while minimizing computational cost. First introduced by Chun-Hung Chen in the mid-1990s, OCBA determines how many simulation runs (or replications), or how much computational time, each design alternative needs in order to identify the best option while using as few resources as possible. It works by focusing more effort on the alternatives that are harder to evaluate, such as those with higher uncertainty or with performance close to that of the best option.