Generalized reduced gradient method

The Generalized reduced gradient method (GRG) generalizes the reduced gradient method by allowing nonlinear constraints and arbitrary bounds on the variables. The problem has the form:

$$\max f(x) : h(x) = 0, \; L \le x \le U,$$

where $h$ has dimension $m$. The method supposes we can partition $x = (v, w)$ such that:

  • $v$ has dimension $m$ (and $w$ has dimension $n-m$);
  • the values of $v$ are strictly within their bounds: $L_v < v < U_v$ (this is a nondegeneracy assumption);
  • $\nabla_v h(x)$ is nonsingular at $x = (v, w)$ (a small worked instance follows this list).
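For concreteness, here is a hypothetical instance (not part of the original entry) with $n = 2$ and $m = 1$:

$$h(x) = x_1 + x_2^2 - 2, \qquad v = x_1, \quad w = x_2, \qquad \nabla_v h(x) = \frac{\partial h}{\partial x_1} = 1,$$

which is nonsingular everywhere, so $v(w) = 2 - w^2$ is the unique solution of $h(v(w), w) = 0$ for each $w$.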

As in the linear case, for any $w$ there is a unique value, $v(w)$, such that $h(v(w), w) = 0$ (cf. Implicit Function Theorem), which implies that $dv/dw = -(\nabla_v h(x))^{-1} \nabla_w h(x)$. The idea is to choose the direction of the independent variables to be the reduced gradient: $\nabla_w \left( f(x) - y^T h(x) \right)$, where $y = (\nabla_v h(x))^{-T} \nabla_v f(x)$ is the multiplier estimate that makes $\nabla_v \left( f(x) - y^T h(x) \right) = 0$. Then the step size is chosen, and a correction procedure is applied to return to the surface $h(x) = 0$.
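As a sketch of how this computation looks numerically, here is a minimal NumPy example of one reduced-gradient evaluation on the toy instance above; the function names and problem data are illustrative assumptions, not part of the original entry:

    import numpy as np

    # Hypothetical toy instance (not from the original entry):
    #   max f(x) = -(x1 - 1)^2 - (x2 - 1)^2
    #   s.t. h(x) = x1 + x2^2 - 2 = 0,  0 <= x <= 3
    # Partition: v = x[:1] (dependent, m = 1), w = x[1:] (independent).

    def grad_f(x):
        return np.array([-2.0 * (x[0] - 1.0), -2.0 * (x[1] - 1.0)])

    def jac_h(x):
        # Jacobian of h split into dh/dv (m x m) and dh/dw (m x (n-m)).
        return np.array([[1.0]]), np.array([[2.0 * x[1]]])

    def reduced_gradient(x, m=1):
        gf_v, gf_w = grad_f(x)[:m], grad_f(x)[m:]
        Hv, Hw = jac_h(x)
        y = np.linalg.solve(Hv.T, gf_v)    # multiplier estimate: (dh/dv)^T y = grad_v f
        return gf_w - Hw.T @ y             # grad_w f - (dh/dw)^T y

    x = np.array([2.0, 0.0])               # feasible point: h(x) = 0
    print(reduced_gradient(x))             # ascent direction for w; here [2.]

Note that the sketch solves linear systems rather than forming $(\nabla_v h)^{-1}$ explicitly, which is the standard numerically stable choice.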

The main steps (except for the correction procedure) are the same as in the reduced gradient method, changing the working set as appropriate.
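The entry does not specify the correction procedure; a common choice is a Newton iteration on $h(v, w) = 0$ in the dependent variables $v$ with $w$ held fixed. A minimal self-contained sketch under that assumption, reusing the toy problem from the previous example:

    import numpy as np

    def h(x):
        return np.array([x[0] + x[1]**2 - 2.0])

    def jac_h_v(x):
        # dh/dv for the toy problem (v = x1), an m x m matrix.
        return np.array([[1.0]])

    def correct(x, m=1, tol=1e-10, max_iter=20):
        # Newton iteration on v alone (w fixed) to restore h(x) = 0
        # after a step in the independent variables.
        x = x.copy()
        for _ in range(max_iter):
            r = h(x)
            if np.linalg.norm(r) < tol:
                break
            x[:m] -= np.linalg.solve(jac_h_v(x), r)   # v <- v - (dh/dv)^{-1} h(x)
        return x

    # After a step in w along the reduced gradient, pull x back onto h(x) = 0:
    x_step = np.array([2.0, 0.5])                     # infeasible: h(x_step) = 0.25
    x_corr = correct(x_step)
    print(x_corr, h(x_corr))                          # v adjusted so h(x) ~= 0

Because $h$ is linear in $v$ in this toy problem, the correction converges in one Newton step; in general several iterations may be needed.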

