Look up quasilinear in Wiktionary, the free dictionary.
Quasilinear may refer to:
In mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form (−∞, a) is a convex set. For a function of a single variable, along any stretch of the curve the highest point is one of the endpoints. The negative of a quasiconvex function is said to be quasiconcave.
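For concreteness, the defining condition can be sketched as follows (the symbols f, S, x, y, λ are chosen here for illustration): a function f on a convex set S is quasiconvex when
\[ f(\lambda x + (1-\lambda)y) \le \max\{f(x), f(y)\} \quad \text{for all } x, y \in S \text{ and } \lambda \in [0,1]. \]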
In economics and consumer theory, quasilinear utility functions are linear in one argument, generally the numeraire. Quasilinear preferences can be represented by a utility function of the form u(x, y_1, ..., y_n) = x + θ(y_1, ..., y_n), where θ is strictly concave. A useful property of the quasilinear utility function is that the Marshallian/Walrasian demand for y_1, ..., y_n does not depend on wealth and is therefore not subject to a wealth effect. The absence of a wealth effect simplifies analysis and makes quasilinear utility functions a common choice for modelling. Furthermore, when utility is quasilinear, compensating variation (CV), equivalent variation (EV), and consumer surplus are algebraically equivalent. In mechanism design, quasilinear utility ensures that agents can compensate each other with side payments.
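A minimal sketch of why the wealth effect vanishes, using an illustrative two-good form u(x, y) = x + θ(y) with x the numeraire: maximizing utility subject to the budget x + p·y = w and assuming an interior solution gives the first-order condition
\[ \theta'(y) = p, \]
so the demand for y depends only on the price p and not on wealth w.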
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Because such relations are extremely common, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.
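A minimal illustrative example (generic notation, not taken from any particular article above): the equation
\[ \frac{dy}{dt} = k\,y \]
relates the unknown function y(t) to its derivative, and its solutions are y(t) = C e^{kt} for an arbitrary constant C.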
This disambiguation page lists articles associated with the title Quasilinear. If an internal link led you here, you may wish to change the link to point directly to the intended article.
In physics, a conservation law states that a particular measurable property of an isolated physical system does not change as the system evolves over time. Exact conservation laws include conservation of energy, conservation of linear momentum, conservation of angular momentum, and conservation of electric charge. There are also many approximate conservation laws, which apply to such quantities as mass, parity, lepton number, baryon number, strangeness, hypercharge, etc. These quantities are conserved in certain classes of physics processes, but not in all.
The derivative of a function of a real variable measures the sensitivity to change of the function value with respect to a change in its argument. Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity: this measures how quickly the position of the object changes when time advances.
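In that example, writing x(t) for the position at time t, the velocity is the limit of difference quotients:
\[ v(t) = x'(t) = \lim_{h \to 0} \frac{x(t+h) - x(t)}{h}. \]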
In mathematics, an exponential function is a function of the form f(x) = ab^x, where the base b is a positive real number and the argument x occurs as an exponent.
In mathematics, a holomorphic function is a complex-valued function of one or more complex variables that is, at every point of its domain, complex differentiable in a neighbourhood of the point. The existence of a complex derivative in a neighbourhood is a very strong condition, for it implies that any holomorphic function is actually infinitely differentiable and equal, locally, to its own Taylor series (analytic). Holomorphic functions are the central objects of study in complex analysis.
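For illustration, complex differentiability of f at a point z_0 means that the limit
\[ f'(z_0) = \lim_{z \to z_0} \frac{f(z) - f(z_0)}{z - z_0} \]
exists, with z approaching z_0 along any path in the complex plane.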
In mathematics, a Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point.
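Concretely, the Taylor series of an infinitely differentiable function f about a point a is
\[ \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x - a)^n. \]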
In mathematics, Laplace's equation is a second-order partial differential equation named after Pierre-Simon Laplace, who first studied its properties. It is often written as ∇²φ = 0 or Δφ = 0, where Δ = ∇² is the Laplace operator and φ is a scalar function.
A finite difference is a mathematical expression of the form f(x + b) − f(x + a). If a finite difference is divided by b − a, one gets a difference quotient. The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems.
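For example, taking a = 0 and b = h gives the forward difference, whose difference quotient approximates the derivative when h is small:
\[ \frac{f(x+h) - f(x)}{h} \approx f'(x). \]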
In mathematics, differential calculus is a subfield of calculus concerned with the study of the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus, the study of the area beneath a curve.
Fractional calculus is a branch of mathematical analysis that studies the several different possibilities of defining real number powers or complex number powers of the differentiation operator D and of the integration operator J, and of developing a calculus for such operators that generalizes the classical one.
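One way to see what is being asked for, in illustrative notation: a half-derivative operator D^{1/2} should satisfy
\[ D^{1/2}\!\left(D^{1/2} f\right) = D f = f', \]
so that applying it twice reproduces ordinary differentiation.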
In mathematics, an implicit equation is a relation of the form R(x_1, ..., x_n) = 0, where R is a function of several variables. For example, the implicit equation of the unit circle is x² + y² − 1 = 0.
In mathematics, a linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is, an equation of the form a_0(x)y + a_1(x)y′ + a_2(x)y″ + ⋯ + a_n(x)y⁽ⁿ⁾ = b(x), where a_0(x), ..., a_n(x) and b(x) are given functions and y′, ..., y⁽ⁿ⁾ are the successive derivatives of the unknown function y of the variable x.
In computer science, the time complexity is the computational complexity that describes the amount of time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to differ by at most a constant factor.
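In the sense relevant to this page, an algorithm runs in quasilinear time when its running time satisfies
\[ T(n) = O(n \log^k n) \]
for some constant k; the case k = 1, that is O(n log n), is the operation count of comparison sorts such as merge sort.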
In mathematics, specifically in calculus and complex analysis, the logarithmic derivative of a function f is defined by the formula f′/f, where f′ is the derivative of f.
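Equivalently, where ln f is defined, the logarithmic derivative is the derivative of the logarithm of f. For instance, for the illustrative function f(x) = x^n (with x ≠ 0),
\[ \frac{f'(x)}{f(x)} = \frac{n x^{n-1}}{x^{n}} = \frac{n}{x}. \]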
In mathematics, the total derivative of a function is the best linear approximation of the value of the function with respect to its arguments. Unlike partial derivatives, the total derivative approximates the function with respect to all of its arguments, not just a single one. In many situations, this is the same as considering all partial derivatives simultaneously. The term "total derivative" is primarily used when f is a function of several variables, because when f is a function of a single variable, the total derivative is the same as the ordinary derivative of the function.
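For illustration, if f depends on variables x_1, ..., x_n that each depend on a parameter t, the total derivative with respect to t is
\[ \frac{df}{dt} = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\,\frac{dx_i}{dt}. \]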
In mathematics, constraint counting is counting the number of constraints in order to compare it with the number of variables, parameters, etc. that are free to be determined, the idea being that in most cases the number of independent choices that can be made is the excess of the latter over the former.
Gorman polar form is a functional form for indirect utility functions in economics. Imposing this form on utility allows the researcher to treat a society of utility-maximizers as if it consisted of a single 'representative' individual. Gorman showed that having the function take Gorman polar form is both necessary and sufficient for this condition to hold.
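As a sketch (a common textbook statement, given here for illustration), an indirect utility function in Gorman polar form can be written
\[ v_i(p, m_i) = a_i(p) + b(p)\,m_i, \]
where p is the price vector and m_i is consumer i's income; the key feature is that the coefficient b(p) on income is the same for every consumer, so aggregate demand depends only on prices and aggregate income.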
In mathematics, a real or complex-valued function f on d-dimensional Euclidean space satisfies a Hölder condition, or is Hölder continuous, when there are nonnegative real constants C and α > 0 such that |f(x) − f(y)| ≤ C‖x − y‖^α for all x and y in the domain of f.
A differential equation can be homogeneous in either of two respects: a first-order differential equation is called homogeneous if it can be written in the form dy/dx = F(y/x), while a linear differential equation is called homogeneous if it has no term that is independent of the unknown function and its derivatives, so that the zero function is a solution.
In general relativity, Gauss–Bonnet gravity, also referred to as Einstein–Gauss–Bonnet gravity, is a modification of the Einstein–Hilbert action to include the Gauss–Bonnet term G = R² − 4R_{μν}R^{μν} + R_{μνρσ}R^{μνρσ}, where R is the Ricci scalar, R_{μν} the Ricci tensor, and R_{μνρσ} the Riemann tensor.