Separable program


A separable program is a mathematical program, say \max\{f(x) : g(x) \le 0,\ h(x) = 0\}, in which each function is separable, i.e., a sum of functions of a single variable:

f(x) = \sum_j f_j(x_j), \qquad g(x) = \sum_j g_j(x_j), \qquad \mbox{and } h(x) = \sum_j h_j(x_j).
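For example (an illustrative instance, not part of the original entry), f(x) = x_1^2 + e^{x_2} is separable, with f_1(x_1) = x_1^2 and f_2(x_2) = e^{x_2}, whereas a product term like x_1 x_2 is not separable as written. It can be made separable by a standard change of variables:

x_1 x_2 = w^2 - v^2, \quad \mbox{where } w = (x_1 + x_2)/2 \mbox{ and } v = (x_1 - x_2)/2,

since w^2 and -v^2 are each functions of a single (new) variable.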

The classical LP approaches to separable programming are the lambda-form and the delta-form, both of which use piecewise-linear approximations of the functions f_j, g_j, and h_j.

For the lambda-form, let \{x^k\} be a specified set of points (the breakpoints), where x^k = [x(k,1), x(k,2), ..., x(k,n)], and let y = \{y(k,j)\} be decision variables that serve as coefficients of convex combinations, giving the following linear program:

\max \sum_{k,j} y(k,j)\, f_j(x(k,j)) : \quad y \ge 0, \quad \sum_k y(k,j) = 1 \mbox{ for each } j,

\sum_{k,j} y(k,j)\, g_j(x(k,j)) \le 0, \qquad \sum_{k,j} y(k,j)\, h_j(x(k,j)) = 0.

A restricted basis entry rule is invoked during the simplex method to yield an approximate solution: for each j, at most two of the y(k,j) may be positive, and they must be adjacent (consecutive k). (However, this approach is dominated by the Generalized Lagrange Multiplier method, which can be viewed as generating the approximating breakpoints a posteriori, placing them successively finer near the solution.)
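To make the lambda-form concrete, here is a minimal sketch on a hypothetical one-variable instance (maximize \sqrt{x} subject to x^2 - 4 \le 0 over [0, 3]), assuming NumPy and SciPy's linprog; the instance, breakpoint grid, and variable names are illustrative, not from the glossary entry. Because the maximand is concave and the constraint convex here, the adjacency condition holds at an optimal solution automatically, so a plain LP solve suffices:

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical instance: max sqrt(x)  s.t.  x^2 - 4 <= 0,  x in [0, 3].
    f = np.sqrt
    g = lambda x: x**2 - 4.0

    xk = np.linspace(0.0, 3.0, 13)          # breakpoints x^k
    c = -f(xk)                              # linprog minimizes, so negate f
    A_ub = g(xk)[None, :]                   # sum_k y(k) g(x^k) <= 0
    A_eq = np.ones((1, xk.size))            # sum_k y(k) = 1 (convex combination)

    res = linprog(c, A_ub=A_ub, b_ub=[0.0], A_eq=A_eq, b_eq=[1.0],
                  bounds=(0.0, None))
    y = res.x
    print("x ~", y @ xk, " f ~", -res.fun)  # approx x = 2, f = sqrt(2)

The LP recovers x close to 2 with objective close to \sqrt{2}; refining the breakpoint grid near the solution sharpens the approximation, which is what the Generalized Lagrange Multiplier view does adaptively.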

The delta-form uses the differences between successive breakpoints, \Delta x(k,j) = x(k,j) - x(k-1,j), with decision variables u(k,j) \in [0,1] giving the fraction of each difference that is used, so that x_j = x(0,j) + \sum_k u(k,j)\, \Delta x(k,j). The associated functional differences are:

\Delta f(k,j) = f_j(x(k,j)) - f_j(x(k-1,j));
\Delta g(k,j) = g_j(x(k,j)) - g_j(x(k-1,j));
\Delta h(k,j) = h_j(x(k,j)) - h_j(x(k-1,j)).

Then, the approximating LP is:

\max \sum_{k,j} \Delta f(k,j)\, u(k,j) : \quad 0 \le u \le 1;

\sum_{k,j} \Delta g(k,j)\, u(k,j) \le b, \qquad \sum_{k,j} \Delta h(k,j)\, u(k,j) = c,

where b = -\sum_j g_j(x(0,j)) and c = -\sum_j h_j(x(0,j)) (the analogous constant, \sum_j f_j(x(0,j)), was dropped from the objective). Another restricted basis rule is invoked: u(k,j) > 0 implies u(q,j) = 1 for all q < k; that is, for each variable j the differences must be used in order.
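For comparison, here is the same hypothetical instance in delta-form, under the same assumptions (NumPy, SciPy's linprog; illustrative names). With a concave maximand and a convex constraint, \Delta f(k) decreases and \Delta g(k) increases in k, so an optimal solution fills the segments in order and the restricted basis rule is again unnecessary:

    import numpy as np
    from scipy.optimize import linprog

    # Same hypothetical instance: max sqrt(x)  s.t.  x^2 - 4 <= 0,  x in [0, 3].
    f = np.sqrt
    g = lambda x: x**2 - 4.0

    xk = np.linspace(0.0, 3.0, 13)          # breakpoints x(0), ..., x(K)
    df = np.diff(f(xk))                     # Delta f(k) = f(x(k)) - f(x(k-1))
    dg = np.diff(g(xk))                     # Delta g(k)
    b = -g(xk[0])                           # right-hand side b = -g(x(0))

    res = linprog(-df, A_ub=dg[None, :], b_ub=[b],
                  bounds=[(0.0, 1.0)] * df.size)
    u = res.x
    x_approx = xk[0] + u @ np.diff(xk)      # x = x(0) + sum_k u(k) Delta x(k)
    print("x ~", x_approx, " f ~", f(xk[0]) - res.fun)

Both forms return x close to 2 and objective close to \sqrt{2} on this grid; they differ only in how the piecewise-linear approximation is parameterized.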

