# Lagrangian relaxation

Lagrangian relaxation removes constraints from a mathematical program and prices them into the objective, as in the Lagrangian function. Specifically, suppose we have

$$\max \{ f(x): x \in X, \, g(x) \le 0, \, h(x) = 0\}.$$

The Lagrangian relaxation is

$$\max \{f(x) - u^T g(x) - v^T h(x): x \in X\},$$

where $u \ge 0$ ($v$ is unrestricted).

This is a relaxation of the original mathematical program because the constraints have been removed (expanding the feasible region), and for $x$ feasible (i.e., satisfying these constraints), the relaxed objective satisfies:

$$f(x) - u^T g(x) - v^T h(x) \ge f(x),$$

because $v^T h(x) = 0$ and $u^T g(x) \le 0.$
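The bounding property above can be checked numerically on a toy problem. The following sketch is illustrative (the choices of $f$, $g$, $X$, and the multiplier values are assumptions, not part of the original): it maximizes $f(x) = x$ over $X = \{0, \ldots, 4\}$ subject to $g(x) = x - 2 \le 0$, and verifies that the Lagrangian relaxation upper-bounds the constrained optimum for every $u \ge 0$ tried.

```python
# Toy instance (illustrative assumptions): max{f(x): x in X, g(x) <= 0}
# with f(x) = x, g(x) = x - 2 (i.e., x <= 2), X = {0, 1, 2, 3, 4}.
X = range(5)
f = lambda x: x
g = lambda x: x - 2

# Constrained optimum: best objective over feasible points.
z_star = max(f(x) for x in X if g(x) <= 0)

def lagrangian_bound(u):
    """Value of the Lagrangian relaxation max{f(x) - u*g(x): x in X}, u >= 0."""
    return max(f(x) - u * g(x) for x in X)

# For every multiplier u >= 0, the relaxation upper-bounds z_star.
for u in [0.0, 0.5, 1.0, 2.0]:
    assert lagrangian_bound(u) >= z_star
```

Note that $u = 1$ makes the bound tight here ($\max_x \{x - (x-2)\} = 2 = z^*$), which previews the role of the dual search for a best multiplier.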

The objective is the usual Lagrangian function when $X$ is simply $\mathbb{R}^n$. It provides a foundation for Lagrangian duality and the Generalized Lagrange Multiplier Method.

Note that $X$ could retain some constraints, such as in the separable form $X = X_1 \times \ldots \times X_n$. Now suppose $f(x) = f_1(x_1) + \ldots + f_n(x_n),$ $g(x) = g_1(x_1) + \ldots + g_n(x_n),$ and $h(x) = h_1(x_1) + \ldots + h_n(x_n).$ Then, the Lagrangian relaxation decomposes into $n$ independent mathematical programs for any multiplier value:

$$\max \{f(x) - u^T g(x) - v^T h(x): x \in X\} = \sum_{j=1}^n \max \{f_j(x_j) - u^T g_j(x_j) - v^T h_j(x_j): x_j \in X_j \}.$$
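A minimal numerical sketch of this decomposition, with $n = 2$ scalar blocks (all sets, functions, and the multiplier value are illustrative assumptions): solving the joint relaxation over $X_1 \times X_2$ gives the same value as summing the two independent per-block maxima.

```python
import itertools

# Illustrative separable instance with two blocks and one (scalar) relaxed constraint.
X1, X2 = [0, 1, 2], [0, 1, 2, 3]
f = [lambda x: 3 * x, lambda x: 2 * x]      # f(x) = f1(x1) + f2(x2)
g = [lambda x: x - 1, lambda x: x - 2]      # g(x) = g1(x1) + g2(x2)
u = 1.5                                     # any fixed multiplier u >= 0

# Joint relaxation: maximize over all of X = X1 x X2 at once...
joint = max(
    sum(fj(xj) - u * gj(xj) for fj, gj, xj in zip(f, g, x))
    for x in itertools.product(X1, X2)
)

# ...equals the sum of n independent one-block maximizations.
decomposed = sum(
    max(fj(xj) - u * gj(xj) for xj in Xj)
    for fj, gj, Xj in zip(f, g, [X1, X2])
)

assert joint == decomposed
```

The point of the decomposition is practical: each block's subproblem can be solved separately (often in parallel), which is what makes Lagrangian relaxation attractive for large separable programs.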