In mathematical optimization, fractional programming is a generalization of linear-fractional programming. The objective function in a fractional program is a ratio of two functions that are in general nonlinear. The ratio to be optimized often describes some kind of efficiency of a system.
Let $f, g, h_j,\ j = 1, \ldots, m$ be real-valued functions defined on a set $S_0 \subseteq \mathbb{R}^n$. Let $S = \{ x \in S_0 : h_j(x) \leq 0,\ j = 1, \ldots, m \}$. The nonlinear program

$$\underset{x \in S}{\text{maximize}} \quad \frac{f(x)}{g(x)},$$

where $g(x) > 0$ on $S$, is called a fractional program.
A fractional program in which f is nonnegative and concave, g is positive and convex, and S is a convex set is called a concave fractional program. If g is affine, f does not have to be restricted in sign. The linear fractional program is a special case of a concave fractional program where all functions are affine.
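In the linear-fractional special case, the program can be written out explicitly; the data $c, d, \alpha, \beta, A, b$ below are generic placeholders for illustration:

```latex
\text{maximize} \quad \frac{c^{\mathsf{T}} x + \alpha}{d^{\mathsf{T}} x + \beta}
\qquad \text{subject to} \quad A x \leq b,
```

where the denominator satisfies $d^{\mathsf{T}} x + \beta > 0$ on the feasible set, matching the positivity assumption on $g$.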
The function $q(x) = \frac{f(x)}{g(x)}$ is semistrictly quasiconcave on $S$. If $f$ and $g$ are differentiable, then $q$ is pseudoconcave. In a linear fractional program, the objective function is pseudolinear.
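Quasiconcavity means every upper level set of the ratio is convex (an interval in one dimension). A quick numerical illustration, with an $f$ and $g$ chosen here for the example rather than taken from the article:

```python
# Check numerically that q(x) = f(x)/g(x) has interval-shaped upper level
# sets on [0, 4], as quasiconcavity requires, for an illustrative pair:
#   f(x) = -x^2 + 4x  (concave, nonnegative on [0, 4])
#   g(x) = x^2 + 1    (convex, positive)

def f(x): return -x * x + 4 * x
def g(x): return x * x + 1

xs = [4 * i / 10000 for i in range(10001)]
qs = [f(x) / g(x) for x in xs]

for c in [0.2, 0.5, 1.0, 1.5]:
    idx = [i for i, q in enumerate(qs) if q >= c]
    # Quasiconcavity: the sampled level set is one contiguous run of indices.
    assert idx == list(range(idx[0], idx[-1] + 1))
```

Because every level set is an interval, any local maximum of the ratio is global, which is what makes the concave fractional program tractable.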
By the transformation $y = \frac{x}{g(x)},\ t = \frac{1}{g(x)}$, any concave fractional program can be transformed to the equivalent parameter-free concave program [1]

$$\underset{\frac{y}{t} \in S_0}{\text{maximize}} \quad t f\!\left(\frac{y}{t}\right)$$
$$\text{subject to} \quad t g\!\left(\frac{y}{t}\right) \leq 1, \quad t \geq 0.$$
If $g$ is affine, the first constraint is changed to $t g\!\left(\frac{y}{t}\right) = 1$ and the assumption that $g$ is positive may be dropped. Also, writing $g(x) = a^{\mathsf{T}} x + b$, it simplifies to the linear constraint $a^{\mathsf{T}} y + b t = 1$.
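The equivalence of the two programs can be checked numerically. The sketch below uses an illustrative one-dimensional instance (the particular $f$, $g$, interval, and grid resolutions are assumptions of this example, not from the article) and compares the optimum of the transformed program over $(y, t)$ with direct maximization of the ratio:

```python
# Numerical check of the transformation y = x/g(x), t = 1/g(x) on
#   maximize f(x)/g(x) over S_0 = [0, 4],
#   f(x) = -x^2 + 4x (concave, >= 0),  g(x) = x + 1 (affine, positive).

def f(x): return -x * x + 4 * x
def g(x): return x + 1

# Direct maximization of the ratio on a grid over S_0.
xs = [4 * i / 100000 for i in range(100001)]
direct = max(f(x) / g(x) for x in xs)

# Transformed concave program: maximize t*f(y/t) subject to
# t*g(y/t) <= 1, t > 0, y/t in [0, 4], searched on a (y, t) grid.
best = float("-inf")
for i in range(1, 1001):
    t = i / 1000              # here g >= 1 on S_0, so t = 1/g(x) lies in (0, 1]
    for j in range(1001):
        y = 4 * t * j / 1000  # parametrized so that x = y/t stays in [0, 4]
        x = y / t
        if t * g(x) <= 1:
            best = max(best, t * f(x))

# Both formulations reach the same optimal value up to grid resolution.
assert abs(direct - best) < 0.01
```

At the optimum the constraint $t\,g(y/t) \leq 1$ is active (since $f \geq 0$, increasing $t$ increases the objective), so $t = 1/g(x)$ and the transformed objective reduces to the original ratio.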
The Lagrangian dual of the equivalent concave program is

$$\underset{u}{\text{minimize}} \quad \sup_{x \in S_0} \frac{f(x) - u^{\mathsf{T}} h(x)}{g(x)}$$
$$\text{subject to} \quad u_i \geq 0, \quad i = 1, \ldots, m,$$

where $h(x) = (h_1(x), \ldots, h_m(x))$.
One of the most widely used algorithms for solving concave fractional programs is Dinkelbach's method, introduced by Werner Dinkelbach in 1967.[2] It is an iterative approach that replaces the fractional objective with a sequence of simpler parametric programs.
The method defines, for a parameter $\lambda$, the auxiliary function

$$F(\lambda) = \max_{x \in S} \left\{ f(x) - \lambda\, g(x) \right\}.$$
The optimal value $\lambda^*$ of the fractional program is the unique value such that $F(\lambda^*) = 0$. Dinkelbach's algorithm proceeds iteratively:

1. Choose a feasible $x_0 \in S$ and set $\lambda_1 = f(x_0)/g(x_0)$.
2. Solve the parametric program $x_k = \arg\max_{x \in S} \{ f(x) - \lambda_k\, g(x) \}$.
3. If $F(\lambda_k) = f(x_k) - \lambda_k\, g(x_k)$ is zero (to within tolerance), stop and return $\lambda_k$; otherwise set $\lambda_{k+1} = f(x_k)/g(x_k)$ and repeat.
The sequence $\{\lambda_k\}$ converges superlinearly to the optimal ratio $\lambda^*$.[3]
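The iteration can be sketched compactly. The example below is illustrative, not from the article: it picks a simple one-dimensional instance and solves the inner parametric program by grid search (a real implementation would use a convex solver for the inner step):

```python
# Dinkelbach's method on:  maximize f(x)/g(x) over S = [0, 4], with
#   f(x) = -x^2 + 4x  (concave, nonnegative on S)
#   g(x) = x + 1      (affine, positive on S)

def f(x): return -x * x + 4 * x
def g(x): return x + 1

def inner_max(lam, lo=0.0, hi=4.0, n=200001):
    """Maximize the parametric objective f(x) - lam*g(x) by grid search."""
    best_x, best_v = lo, f(lo) - lam * g(lo)
    for i in range(1, n):
        x = lo + (hi - lo) * i / (n - 1)
        v = f(x) - lam * g(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x, best_v

def dinkelbach(lam=0.0, tol=1e-9, max_iter=50):
    for _ in range(max_iter):
        x, F = inner_max(lam)   # F(lam) = max_x { f(x) - lam*g(x) }
        if F < tol:             # F(lam*) = 0 at the optimal ratio
            return lam, x
        lam = f(x) / g(x)       # next parameter: the current ratio value
    return lam, x

lam_star, x_star = dinkelbach()
```

For this instance the exact optimum is $\lambda^* = 6 - 2\sqrt{5} \approx 1.5279$, attained at $x^* = \sqrt{5} - 1$, and the iteration reaches it in a handful of steps, consistent with the superlinear convergence rate.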