
GAC

See generalized arc consistency.


GAMS

GAMS stands for the General Algebraic Modeling System.

GRASP

GRASP stands for Greedy Randomized Adaptive Search Procedures. It is a metaheuristic that uses restarts to search many portions of the solution space. Each restart constructs a starting solution with a greedy randomized procedure, then some heuristic is employed to reach a local optimum. The choice of local search can be problem-specific, such as using an n-opt neighborhood for the TSP, or it can use another metaheuristic, like simulated annealing or tabu search.
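
A runnable sketch for a small random TSP: greedy randomized nearest-neighbor construction followed by a 2-opt local search (an instance of the n-opt neighborhoods mentioned above). All parameter choices here are illustrative.

import itertools, math, random

rng = random.Random(0)
pts = [(rng.random(), rng.random()) for _ in range(12)]
dist = lambda i, j: math.dist(pts[i], pts[j])
tour_len = lambda t: sum(dist(t[k], t[(k + 1) % len(t)]) for k in range(len(t)))

def build(alpha=0.3):
    """Greedy randomized construction: pick randomly among the alpha-best next cities."""
    tour, left = [0], set(range(1, len(pts)))
    while left:
        cand = sorted(left, key=lambda j: dist(tour[-1], j))
        tour.append(rng.choice(cand[:max(1, int(alpha * len(cand)))]))
        left.remove(tour[-1])
    return tour

def two_opt(t):
    """Descend to a 2-opt local optimum."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, len(t)), 2):
            new = t[:i] + t[i:j][::-1] + t[j:]   # reverse one segment
            if tour_len(new) < tour_len(t):
                t, improved = new, True
    return t

best = min((two_opt(build()) for _ in range(20)), key=tour_len)   # restarts
print(tour_len(best))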


Game theory

In general, a (mathematical) game can be played by one player, such as a puzzle, but its main connection with mathematical programming arises when there are at least two players in conflict. Each player chooses a strategy that maximizes his payoff. If there are exactly two players and one player's loss is the other's gain, the game is called zero sum. In this case, a payoff matrix, LaTeX: A, is given where LaTeX: A_{ij} is the payoff to player 1, and the loss to player 2, if player 1 uses strategy i and player 2 uses strategy j. In this representation each row of LaTeX: A corresponds to a strategy of player 1, and each column corresponds to a strategy of player 2. If LaTeX: A is LaTeX: m \times n, this means player 1 has m strategies, and player 2 has n strategies. Here is an example of a LaTeX: 2 \times 3 payoff matrix:

LaTeX: 
A = \left[ \begin{array}{rrr}
10 & -5 & 20 \\ -15 & 5 & -15
\end{array} \right]

A primary connection with mathematical programming is duality. Suppose LaTeX: (x_1, x_2, \ldots, x_m) and LaTeX: (y_1, y_2, \ldots, y_n) are strategy probability vectors chosen by the two players, i.e. in a particular play, LaTeX: x_i is the probability player 1 chooses strategy i and LaTeX: y_j is the probability player 2 chooses strategy j. Then, the value, LaTeX: v, of the game is the expected payoff:

LaTeX: 
v = x A y^T = \sum_{ij} x_i A_{ij} y_j

where player 1 seeks to maximize LaTeX: v and player 2 seeks to minimize LaTeX: v. A pure strategy is a unit vector. For example, if LaTeX: x = (1, 0, 0, \ldots, 0), then player 1 is using his first strategy 100% of the time; if LaTeX: y = (0, 1, 0, 0, \ldots, 0), player 2 is using his second strategy 100% of the time. The expected payoff is simply the cell entry, LaTeX: A_{12}, which is -5 in the example (player 2 pays 5 units to player 1). The strategies that are not pure are called mixed. For example, LaTeX: x = (1/2, 1/2) means player 1 plays each of the two strategies 50% of the time (flipping a coin on each play); LaTeX: y =(1/4, 3/4, 0) means player 2 plays his first strategy 25% of the time and his second the remaining 75%. The expected payoff is -5/8.

The fundamental duality theorem of linear programming (see the supplement on mathematical programming duals) provides the equivalent primal-dual formulation:

Player 1 (Primal):

LaTeX: 
\max v : x \ge 0, \; \sum_i x_i = 1, \; v \le \sum_i x_i A_{ij} \; \forall j.

Player 2 (Dual):

LaTeX: 
\min v : y \ge 0, \; \sum_j y_j = 1, \; v \ge \sum_j A_{ij} y_j \; \forall i.
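
As a numerical check, player 1's primal LP for the example matrix can be solved directly (a sketch assuming SciPy is available; the variable layout LaTeX: (x_1, x_2, v) is my own):

import numpy as np
from scipy.optimize import linprog

A = np.array([[10.0, -5.0, 20.0],
              [-15.0, 5.0, -15.0]])
m, n = A.shape

c = np.r_[np.zeros(m), -1.0]            # maximize v <=> minimize -v
A_ub = np.c_[-A.T, np.ones(n)]          # v - sum_i x_i A_ij <= 0 for each j
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)   # sum_i x_i = 1
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]      # x >= 0, v free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:m], res.x[m])   # mixed strategy about (4/7, 3/7), value about -5/7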

This is the simplest 2-person game; more generally, there can be n persons, and subsets of the players can form cooperative agreements. If this is not allowed, the game is called non-cooperative.

A combinatorial game is a cooperative game with restrictions on the formation of coalitions that induce a combinatorial structure, such as a matroid.

Gauge function

A nonnegative, convex function from LaTeX: \mathbb{R}^n to the extended reals that is positively homogeneous and satisfies LaTeX: f(0)=0. A norm is a gauge function. Here is an example that is not a norm:

LaTeX: 
f(x,y,z) = \left\{ \begin{array}{rl}
\| (x-z,\; x-y) \|_2, & 2x - y - z = 0 \\
\infty, & \mbox{ otherwise.}
\end{array} \right.


Gauss-Jordan elimination

A method to solve LaTeX: Ax=b that performs elementary row and column operations on LaTeX: A to annihilate successive elements of LaTeX: A in order to reduce LaTeX: A to an identity matrix. On paper, the same operations are applied to LaTeX: b, then the solution is obtained by solving the resulting identity system. In a computer, the matrices effecting the elementary operations are saved as elementary matrices, say LaTeX: E_i for the i-th operation. Since LaTeX: E_n \cdots E_2 E_1 A = I, the solution is LaTeX: x = E_n \cdots E_2 E_1 b, and forward transformation applies the elementary matrices in that order to compute LaTeX: x. This is what is done in the (revised) simplex method, and each iteration is a pivot operation.
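
A bare-bones sketch (assuming NumPy; no pivoting safeguards, so it expects a square nonsingular LaTeX: A with nonzero pivots in order):

import numpy as np

def gauss_jordan_solve(A, b):
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for i in range(n):
        M[i] /= M[i, i]                 # scale the pivot row
        for k in range(n):
            if k != i:
                M[k] -= M[k, i] * M[i]  # annihilate column i in all other rows
    return M[:, -1]                     # A reduced to I; last column is x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gauss_jordan_solve(A, b))         # [0.8, 1.4]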


Gauss-Seidel method

An iterative method to solve LaTeX: Ax=b using the latest values of the components of LaTeX: x during the updates. Specifically, LaTeX: A is split into LaTeX: L-T, where LaTeX: L is lower triangular, and the update solves LaTeX: Lx' = Tx + b. (Surprisingly, history indicates Gauss did not know about this method, and Seidel did not recommend it.) Often people use the name to mean the use of latest values, rather than the specific splitting. In parallel computation, for example, the latest values of LaTeX: x are typically not used when computing LaTeX: x', since doing so would forfeit the parallelism.
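
A minimal sketch (assuming NumPy); convergence holds, e.g., for strictly diagonally dominant LaTeX: A, which this code does not check:

import numpy as np

def gauss_seidel(A, b, iters=50):
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # use the latest components of x as soon as they are updated
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([9.0, 12.0])
print(gauss_seidel(A, b))   # approaches the solution of Ax = b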


Gaussian elimination

A method to solve LaTeX: Ax=b that performs elementary row operations on LaTeX: A to annihilate successive elements of LaTeX: A in order to reduce LaTeX: A to an upper triangular matrix, LaTeX: U. On paper, the same operations are applied to LaTeX: b, then the solution is obtained by solving the resulting upper triangular system. In a computer, the product of the matrices effecting the elementary row operations is a lower triangular matrix, LaTeX: L, with unit diagonal. Once this phase is completed, the system LaTeX: Ax=b becomes LaTeX: LUx=b. This is then solved in two steps: forward substitution solves LaTeX: Ly=b; then backward substitution solves LaTeX: Ux=y. (Of course, computer implementations vary.)
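
A bare-bones sketch of the factor-and-substitute pattern (assuming NumPy; real implementations pivot for numerical stability):

import numpy as np

def lu_solve(A, b):
    A = A.astype(float).copy()
    n = len(b)
    L = np.eye(n)
    for j in range(n):                      # elimination phase: A becomes U
        for i in range(j + 1, n):
            L[i, j] = A[i, j] / A[j, j]
            A[i, j:] -= L[i, j] * A[j, j:]
    y = np.zeros(n)
    for i in range(n):                      # forward substitution: Ly = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):            # backward substitution: Ux = y
        x[i] = (y[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(lu_solve(A, np.array([3.0, 5.0])))    # [0.8, 1.4]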


Generalized Lagrange multiplier method

The Generalized Lagrange Multiplier method (GLM) solves a sequence of Lagrangian optimization (relaxation) problems, searching the multiplier space by some method to seek a minimum of the maximum Lagrangian function.

  1. Given LaTeX: (u, v) with LaTeX: u \ge 0, find LaTeX: x \in \arg\!\max \{f(x) - u^T g(x) - v^T h(x) : x \in X\}; the optimal value is LaTeX: L^*(u,v).
  2. Test whether 0 is in the convex hull of LaTeX: S(x, u, v) := \{(b, c): g(x) \le b, \; h(x) = c,\; u^T g(x) =u^T b\}. If so, stop: LaTeX: x solves the original mathematical program if LaTeX: 0 \in S(x, u, v); otherwise, search alternative optima. If no maximum generates 0 as a member of LaTeX: S, then the right-hand side (0) is in a duality gap.
  3. Given LaTeX: (x, u, v), choose LaTeX: (u', v') to reduce the value of LaTeX: L^*(u, v).

This can be viewed from several vantage points; one is that the multiplier search is minimizing the dual function, LaTeX: L^*(u,v), over the multipliers.
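
A minimal sketch of steps 1 and 3 on a toy instance with inequality constraints only (so LaTeX: v is absent); the finite set LaTeX: X, the fixed step size, and the subgradient update are illustrative assumptions:

import numpy as np

X = [np.array(p) for p in [(0, 0), (1, 0), (0, 1), (1, 1)]]
f = lambda x: 3 * x[0] + 2 * x[1]          # maximize f
g = lambda x: np.array([x[0] + x[1] - 1])  # constraint g(x) <= 0

u = np.zeros(1)
for _ in range(100):
    # step 1: maximize the Lagrangian over X for the current multiplier u
    xstar = max(X, key=lambda x: f(x) - u @ g(x))
    # step 3: move u to reduce L*(u); g(x*) gives a subgradient direction
    u = np.maximum(0.0, u + 0.1 * g(xstar))
print(xstar, u)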


Generalized arc consistency

An extension of arc consistency to constraints with more than two variables in their scope.

A variable LaTeX: x is generalized arc consistent (GAC) with a constraint if every value of the variable can be extended to all the other variables of the constraint in such a way that the constraint is satisfied.

Generalized arc consistency is one of the most commonly enforced forms of consistency in constraint programming.

GAC has also been called hyper-arc consistency and domain consistency.
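
A brute-force sketch of GAC filtering for a single constraint given as a predicate over full tuples; real propagators exploit the structure of each constraint, whereas this generic version is exponential in the scope size:

from itertools import product

def enforce_gac(domains, constraint):
    """Remove values that cannot be extended to a satisfying tuple."""
    changed = True
    while changed:
        changed = False
        for i, dom in enumerate(domains):
            for val in list(dom):
                others = [d if j != i else [val] for j, d in enumerate(domains)]
                if not any(constraint(t) for t in product(*others)):
                    dom.remove(val)   # no support: val is GAC-inconsistent
                    changed = True
    return domains

# x0 + x1 = x2 over small domains
doms = [[0, 1, 2], [0, 1, 2], [3, 4]]
print(enforce_gac(doms, lambda t: t[0] + t[1] == t[2]))   # [[1, 2], [1, 2], [3, 4]]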


Generalized equation

This has the form: LaTeX: 0 \in f(x) + T(x), where LaTeX: f is a function from LaTeX: \mathbb{R}^n into LaTeX: \mathbb{R}^n and LaTeX: T is a point-to-set map that maps LaTeX: x into subsets of LaTeX: \mathbb{R}^n. If LaTeX: T is absent, the condition reduces to an ordinary equation, LaTeX: f(x)=0. Usually, LaTeX: T is assumed to satisfy the monotonicity condition:

LaTeX: 
(x-x')^T(y-y') \ge 0 \mbox{ for } y \in T(x) \mbox{ and } y' \in T(x').

This includes a variational inequality (and hence the complementarity problem) as a special case.


Generalized inverse

Suppose LaTeX: A is any LaTeX: m \times n matrix. LaTeX: A^+ is a generalized inverse of LaTeX: A if LaTeX: A^+ is LaTeX: n \times m and LaTeX: AA^+A = A. Then, a fundamental theorem for linear equations is:

The equation, LaTeX: Ax = b, has a solution if and only if LaTeX: AA^+b = b for any generalized inverse, LaTeX: A^+, in which case the solutions are of the form:
LaTeX: x = A^+b + (I - A^+A)y   for any   LaTeX: y \in \mathbb{R}^n.


The Moore-Penrose class additionally requires that LaTeX: AA^+ and LaTeX: A^+A be symmetric (hermitian, if LaTeX: A is complex). In particular, if LaTeX: \mbox{rank}(A) = m, LaTeX: A^+ = A^T (AA^T)^{-1} is a Moore-Penrose inverse. Note that LaTeX: AA^+ = I, and LaTeX: A^+A is a projection matrix for the subspace LaTeX: \{x: x = A^T y \mbox{ for some } y \in \mathbb{R}^m\}, since LaTeX: x = A^T y implies LaTeX: A^+Ax = A^T (AA^T)^{-1} AA^T y = A^T y = x. Further, LaTeX: I-A^+A projects into its orthogonal complement, which is the null space of LaTeX: A, i.e. LaTeX: A(I-A^+A)x = (A-A)x = 0 for any LaTeX: x \in \mathbb{R}^n.
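
A quick numerical check of the theorem with NumPy's Moore-Penrose pseudoinverse (the matrix and the vector LaTeX: y below are arbitrary choices):

import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])        # rank 2 = m, so Ax = b is always solvable
b = np.array([3.0, 2.0])

Aplus = np.linalg.pinv(A)
assert np.allclose(A @ Aplus @ b, b)    # solvability test from the theorem

y = np.array([1.0, -1.0, 2.0])          # any y in R^3 yields a solution
x = Aplus @ b + (np.eye(3) - Aplus @ A) @ y
print(x, A @ x)                         # A x equals b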


Generalized network

A network in which the flow that reaches the destination could be different from the flow that left the source. In the incidence matrix, instead of

LaTeX: 
\left[ \begin{array}{crc}
\ldots & \vdots & \ldots \\
\ldots & -1 & \ldots \\
\ldots & \vdots & \ldots \\
\ldots & +1 & \ldots \\
\ldots & \vdots & \ldots
\end{array} \right]
\begin{array}{l}
\leftarrow \mbox{ source row } (i) \\ \\ \\
\leftarrow \mbox{ destination row } (j)
\end{array}

we have

LaTeX: 
\left[ \begin{array}{crc}
\ldots & \vdots & \ldots \\
\ldots & -1 & \ldots \\
\ldots & \vdots & \ldots \\
\ldots & g & \ldots \\
\ldots & \vdots & \ldots
\end{array} \right]
\begin{array}{l}
\leftarrow \mbox{ source row } (i) \\ \\ \\
\leftarrow \mbox{ destination row } (j)
\end{array}

where LaTeX: g > 0. If LaTeX: g < 1, there is a loss; if LaTeX: g > 1, there is a gain.


Generalized reduced gradient method

The Generalized reduced gradient method (GRG) is a generalization of the reduced gradient method by allowing nonlinear constraints and arbitrary bounds on the variables. The form is:

LaTeX: 
\max f(x) : h(x) = 0, \; L \le x \le U,

where LaTeX: h has dimension LaTeX: m. The method supposes we can partition LaTeX: x = (v, w) such that:

  • LaTeX: v has dimension LaTeX: m (and LaTeX: w has dimension LaTeX: n-m);
  • the values of LaTeX: v are strictly within their bounds: LaTeX: L_v < v < U_v (this is a nondegeneracy assumption);
  • LaTeX: \nabla_v h(x) is nonsingular at LaTeX: x = (v,w).

As in the linear case, for any LaTeX: w near the current point there is a unique value, LaTeX: v(w), such that LaTeX: h(v(w),w)=0 (c.f., Implicit Function Theorem), which implies that LaTeX: dv/dw = -(\nabla_v h(x))^{-1} \nabla_w h(x). The idea is to choose the direction of the independent variables to be the reduced gradient: LaTeX: \nabla_w (f(x) - y^T h(x)), where LaTeX: y^T = \nabla_v f(x)^T (\nabla_v h(x))^{-1}. Then, the step size is chosen and a correction procedure applied to return to the surface, LaTeX: h(x)=0.

The main steps (except the correction procedure) are the same as the reduced gradient method, changing the working set as appropriate.
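
A minimal sketch of the reduced-gradient computation on a concrete instance (the function, constraint, and partition below are my own illustration, assuming NumPy): maximize LaTeX: f(x) = -(x_0^2 + x_1^2 + x_2^2) subject to LaTeX: h(x) = x_0 + x_1 + x_2 - 1 = 0, with LaTeX: v = (x_0) dependent and LaTeX: w = (x_1, x_2) independent.

import numpy as np

def grad_f(x):
    return -2.0 * x

def jac_h(x):
    return np.array([[1.0, 1.0, 1.0]])       # m = 1 row

x = np.array([0.5, 0.3, 0.2])                 # feasible: h(x) = 0
v_idx, w_idx = [0], [1, 2]

Hv = jac_h(x)[:, v_idx]                       # nabla_v h (m x m), nonsingular
Hw = jac_h(x)[:, w_idx]                       # nabla_w h
y = np.linalg.solve(Hv.T, grad_f(x)[v_idx])   # multiplier estimate
r = grad_f(x)[w_idx] - Hw.T @ y               # reduced gradient w.r.t. w
print(r)                                      # direction for the independent variables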


Generalized upper bound

A Generalized upper bound (GUB) is an upper bound constraint on a sum of variables: LaTeX: \textstyle\sum_{j \in J} x_j \le U. A collection of these that do not overlap (i.e., index sets LaTeX: J are disjoint) comprises the foundation for the generalized upper bounding technique in linear programming. This is where the summation equation is marked in the data structure, rather than represented explicitly as a row in the LP, and the basis is handled in its reduced dimension to gain computational efficiency in both time and space.


Genetic algorithm

Genetic algorithms (GAs) are a class of algorithms inspired by the mechanisms of genetics, which have been applied to global optimization (especially for combinatorial programs). They require the specification of three operations (each typically probabilistic) on objects, called "strings" (these could be real-valued vectors):

  1. Reproduction - combining strings in the population to create a new string (offspring);

    Example: Taking 1st character from 1st parent + rest of string from 2nd parent:

    LaTeX: 
[001001] + [111111] \rightarrow [011111]
  2. Mutation - spontaneous alteration of characters in a string;

    Example: Flipping just the left-most character:

    LaTeX: 
[001001] \rightarrow [101001]

  3. Crossover - combining strings to exchange values, creating new strings in their place.

    Example: With crossover location at 2:

    LaTeX: 
[001001] \; \& \; [111111] \rightarrow [001111], \; [111001]
These can combine to form hybrid operators, and the reproduction and crossover operations can include competition within populations. Here is a generic GA (strategy):

0. Initialize population.

1. Select parents for reproduction and GA operators (viz., mutation and crossover).
2. Perform operations to generate intermediate population and evaluate their fitness values.
3. Select members of population to remain with new generation.

Repeat 1-3 until some stopping rule is reached.

The original GA (1973, by John Holland) used crossover and total population replacement. This means a population with 2N objects (called chromosomes) forms N pairings of parents that produce 2N offspring. The offspring comprise the new generation, and they become the total population, replacing their parents. More generally, a population of size N produces an intermediate population of N+M, from which Ñ members are kept to form the new population. One way to choose which Ñ survive is to take those with the greatest fitness values – survival of the fittest.
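
A tiny generational GA for the classic "OneMax" toy fitness (maximize the number of ones in a bit string); every operator detail here is an illustrative choice, not Holland's original scheme:

import random

rng = random.Random(1)
LEN, POP, GENS = 20, 30, 40
fitness = lambda s: sum(s)

def crossover(p1, p2):
    c = rng.randrange(1, LEN)            # one-point crossover
    return p1[:c] + p2[c:], p2[:c] + p1[c:]

def mutate(s, rate=0.02):
    return [b ^ 1 if rng.random() < rate else b for b in s]

pop = [[rng.randint(0, 1) for _ in range(LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]             # survival of the fittest
    children = []
    while len(children) < POP:
        a, b = rng.sample(parents, 2)
        for child in crossover(a, b):
            children.append(mutate(child))
    pop = children                       # total population replacement
print(max(fitness(s) for s in pop))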

Geometric convergence

See convergence.


Geometric mean

Given a nonnegative LaTeX:  x \in \mathbb{R}^n and a weight vector, LaTeX: a, in the simplex, LaTeX: S_n, the geometric mean is the value:

LaTeX: 
x(1)^{a(1)} \, x(2)^{a(2)} \cdots x(n)^{a(n)}.

For example, for LaTeX: a=(1/2, 1/2), the geometric mean of LaTeX: x = (1, 9) is LaTeX: \sqrt{1 \cdot 9} = 3. A fundamental inequality that provides a foundation for geometric programming is that the geometric mean is never greater than the arithmetic mean:

LaTeX: 
\prod_j x(j)^{a(j)} \le \textstyle\sum_j a(j)x(j),

for all nonnegative LaTeX: x \in \mathbb{R}^n and LaTeX: a \in S_n. Further, equality holds only if LaTeX: x(j) is constant for all LaTeX: j.
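
A short numerical check of the example above (assuming NumPy):

import numpy as np

x = np.array([1.0, 9.0])
a = np.array([0.5, 0.5])
geometric = np.prod(x ** a)    # 3.0
arithmetic = a @ x             # 5.0
assert geometric <= arithmetic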

Geometric program

A mathematical program of the form

LaTeX: 
\min P_0(x) : P_k(x) \le 1, \;  x > 0,

where each LaTeX: P_k is a posynomial (k=0,...,m). Conventionally, the monomials are indexed sequentially, so that the posynomials appear as:

LaTeX: P_{0} = \sum_{i=1}^{N_0} c_{i} \prod_{j} x_{j}^{a_{i,j}}
LaTeX: P_{1} = \sum_{i=N_0 + 1}^{N_1} c_{i} \prod_{j} x_j^{a_{i,j}}
LaTeX: \vdots
LaTeX: P_m = \sum_{i=N_{m-1} + 1}^{N_m} c_i \prod_j x_j^{a_{i,j}}


For example, consider:

LaTeX: 
\min 1/z^3 + 5y/z^2:  x, y, z > 0, \; 1/(xyz) \le 1, \; x/y + z/x \le 1.

Then, LaTeX: n=3 (variables), LaTeX: m=2 (constraints), LaTeX: N_0=2 (monomials in objective), LaTeX: N_1=3 (LaTeX: N_0 + 1 monomial in constraint 1), and LaTeX: N_2 = 5 (LaTeX: N_1 + 2 monomials in constraint 2).

The exponent matrix is the LaTeX: N_m \times n matrix whose i-th row contains the exponents of the variables in the i-th monomial. The example has 5 rows and 3 columns:

LaTeX: 
A = \left[ \begin{array}{rrr}
0 & 0 & -3 \\
0 & 1 & -2 \\
-1 & -1 & -1 \\
1 & -1 & 0 \\
-1 & 0 & 1
\end{array} \right]

The degree of difficulty is the number of terms in all posynomials (LaTeX: N_m) minus the rank of the exponent matrix, minus one. In the above example, the rank of LaTeX: A is 3, so the degree of difficulty is 5-3-1 = 1. If the first constraint (LaTeX: 1/(xyz) \le 1) were not present, the exponent matrix would consist of rows 1, 2, 4, and 5; its rank is still 3, so the degree of difficulty is 4-3-1 = 0. (Using the geometric dual, the solution then reduces to solving a system of linear equations, which is what motivates the terminology.)
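
The degree of difficulty of the example can be checked from the rank of the exponent matrix (assuming NumPy):

import numpy as np

A = np.array([[ 0,  0, -3],
              [ 0,  1, -2],
              [-1, -1, -1],
              [ 1, -1,  0],
              [-1,  0,  1]])
dod = A.shape[0] - np.linalg.matrix_rank(A) - 1
print(dod)   # 5 - 3 - 1 = 1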

Also see superconsistent.

Since its inception, the posynomial form has been extended to signomials. In that case, the duality need not be strong, as there can be a duality gap. The general, or signomial, geometric program is of the form:

LaTeX: 
\min P_0(x) - Q_0(x): x > 0, \; P_k(x) - Q_k(x) \le 1

for LaTeX: k=1,\ldots,m, where LaTeX: P_k and LaTeX: Q_k are posynomials (LaTeX: k=0,\dots,m).


Global Cardinality constraint

This is a global constraint that is specified on LaTeX: n assignment variables, LaTeX: \langle x_1, \dots, x_n \rangle, and LaTeX: n' count variables, LaTeX: \langle c_{v_1}, \dots, c_{v_{n'}} \rangle, and enforces that each value LaTeX: v_i is assigned to exactly LaTeX: c_{v_i} of the assignment variables, LaTeX: x_j. The global cardinality constraint LaTeX: gcc(x_1, \dots, x_n, c_{v_1}, \dots, c_{v_{n'}}) is a generalization of the all different constraint, which requires that every value is assigned to at most one variable.


Global constraint

A global constraint LaTeX: C=\{C(i)\} is a family of scope-parameterized constraints (normally LaTeX:  i \geq 2 ), where LaTeX: i is the size of the scope of the constraint. The relation represented by the constraint is often defined implicitly (i.e., intensionally) via a compact set of characteristics that all satisfying tuples must exhibit. For example, the all-different constraint is satisfied by all assignments to its variables where no pair have the same value. Such implicit definitions are then expressed and enforced by the constraint propagator corresponding to the global constraint.

A global constraint may also be explicitly (i.e., extensionally) defined by providing a set of satisfying (or non-satisfying) tuples.

Among the most common global constraints are the all-different and global cardinality constraints.


Global convergence

See convergence.


Global optimization

Generally refers to mathematical programming without convexity assumptions; such problems are NP-hard in general. There could be a local optimum that is not a global optimum. Some authors use this term to imply the stronger condition that there are multiple local optima. Some solution strategies are given as heuristic search methods (including those that guarantee global convergence, such as branch and bound). As a process associated with algorithm design, some regard this simply as attempts to ensure convergence to a global optimum (unlike a purely local optimization procedure, like steepest ascent). See also Glover's linearization.


Global optimum

A solution to a mathematical program (as distinct from a local optimum). When the mathematical program is not convex, there could be (and typically are) many local optima that are not global optima.


Glover's linearization

Given a binary variable LaTeX: x and a linear function LaTeX: f(w) in discrete and/or continuous variables LaTeX: w \in W, for some set LaTeX: W, this linearization replaces the product LaTeX: f(w)x with a (free) continuous variable LaTeX: z and enforces LaTeX: z = f(w)x by adding four linear inequalities:

LaTeX: 
Lx \le z \le Ux, \; f(w) - U(1-x) \le z \le f(w) - L(1-x),

where the values LaTeX: L and LaTeX: U are defined as

LaTeX: 
L = \min \{ f(w) : w \in W^R\} \mbox{ and } U = \max \{f(w) : w \in W^R\},

and LaTeX: W^R is any relaxation of LaTeX: W.
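
To see that the four inequalities enforce LaTeX: z = f(w)x, one can check the two values of the binary variable:

LaTeX: 
\begin{array}{ll}
x = 1: & f(w) \le z \le f(w) \Rightarrow z = f(w), \mbox{ while } Lx \le z \le Ux \mbox{ holds since } L \le f(w) \le U; \\
x = 0: & 0 \le z \le 0 \Rightarrow z = 0, \mbox{ while } f(w) - U \le z \le f(w) - L \mbox{ holds for the same reason.}
\end{array}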


Goal program

A goal is neither a constraint nor an objective because it is neither a firm requirement nor a function that must be at an extreme value. In English, a goal represents a relation that is desired, but it may be violated if there is adequate compensation. For example, we may have a budget of $1000, and we prefer to operate within it, but if spending just one more dollar yields a major improvement in operations, we would consider spending $1001.

A goal program is put into a standard form mathematical program in two steps:

  1. Add the variable, LaTeX: v, and the constraints, LaTeX: G(x) - v \le 0 and LaTeX: v \ge 0, to measure the level of violation. (If the goal were LaTeX: H(x) = 0, instead of LaTeX: G(x) \le 0, the added constraint would be LaTeX: H(x) + u - v = 0, where LaTeX: u and LaTeX: v are (non-negative) levels of violation below and above the goal, respectively.)
  2. Add the penalty term to the objective: LaTeX: P(v), where LaTeX: P(0)=0, and LaTeX: P is strictly increasing -- i.e., LaTeX: v' \ge v and LaTeX: v'_i > v_i for some LaTeX: i imply LaTeX: P(v') > P(v).

The resulting mathematical program represents the goal program. If the goal is satisfied, LaTeX: v=0; otherwise, the penalty term, LaTeX: P(v), reflects "adequate compensation" for the violation.
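
As a small illustration of the two steps, suppose the budget goal above is written with a hypothetical linear cost as LaTeX: c^Tx - 1000 \le 0. The goal program becomes:

LaTeX: 
\min f(x) + P(v) : c^T x - 1000 - v \le 0, \; v \ge 0,

so spending 1001 is allowed but incurs the penalty LaTeX: P(1).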


Golden mean

The golden mean, or golden ratio is the positive solution to the quadratic equation, LaTeX: x^2 + x - 1 = 0, which is LaTeX: (\sqrt{5}-1)/2, or approximately LaTeX: 0.618. This has the proportionality property: LaTeX:  x : 1 = (1-x) : x, which defines the placements of evaluations for the golden section search.


Golden section search

This finds an interval, LaTeX: [a,b], that contains a maximum of a unimodal function whose domain is LaTeX: [0,1], such that the length of the final interval, LaTeX: [a,b], satisfies: LaTeX: b-a < e (where LaTeX: e > 0 is specified). The golden ratio is denoted below by LaTeX: g, which is approximately 0.618.

  1. Initially compute LaTeX: f(g) and LaTeX: f(1-g), and set the interval of uncertainty to the original endpoints, LaTeX: a=0 and LaTeX: b=1. In general, we have LaTeX: x and LaTeX: y placed in LaTeX: [a,b], such that the distances from the endpoints are the same:
                     g(b-a)
                |----------------|
                a------x---------y------b
                       |----------------|
                             g(b-a)
                 
    
  2. Given the interval LaTeX: [a, b] contains evaluations LaTeX: f(x) and LaTeX: f(y), with LaTeX: x < y, compare: if LaTeX: f(x) > f(y), replace LaTeX: b with LaTeX: y; if LaTeX: f(x) < f(y), replace LaTeX: a with LaTeX: x. (If LaTeX: f(x)=f(y), this leaves the interval LaTeX: [x,y], in which case evaluate LaTeX: f(x+g(y-x)).) One of the following cases prevails:

                  *                                          *
                            *                     *
           a------x---------y------b       a------x---------y------b
                            : drop :       : drop :
                f(x) > f(y)                           f(x) < f(y)
                  x=g(y-a)                              y=g(b-x)
    
                                 *         *
                          a------x---------y------b
                          : drop :         : drop :
                                 f(x) = f(y)
                 
    

The golden ratio is the limit of the ratio of successive Fibonacci numbers: LaTeX: F_{N-1} / F_{N} \rightarrow g. Thus, the golden section search approaches Fibonacci search as the number of function evaluations (LaTeX: N) becomes large.
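
A sketch that mirrors the update rule above (the equal-values case is folded into the second branch):

import math

def golden_section_max(f, eps=1e-6):
    g = (math.sqrt(5.0) - 1.0) / 2.0       # about 0.618
    a, b = 0.0, 1.0
    x, y = 1.0 - g, g                      # interior points, symmetric in [a, b]
    fx, fy = f(x), f(y)
    while b - a > eps:
        if fx > fy:                        # maximum is in [a, y]: drop (y, b]
            b, y, fy = y, x, fx
            x = a + (1.0 - g) * (b - a)
            fx = f(x)
        else:                              # maximum is in [x, b]: drop [a, x)
            a, x, fx = x, y, fy
            y = a + g * (b - a)
            fy = f(y)
    return (a + b) / 2.0

print(golden_section_max(lambda t: -(t - 0.7) ** 2))   # about 0.7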


Gomory cut

This is an early cutting plane for an integer program. Consider LaTeX: \{ (x, s) : x \in \mathbb{Z}^n, \; s \in \mathbb{R}^m, Ax + s = b, x \ge 0, s \ge 0 \}, where LaTeX: A and LaTeX: b have integer values. For any LaTeX: y \in \mathbb{R}^m, let LaTeX: a = A^T y - \lfloor A^T y \rfloor and LaTeX: a_0 = y^T b - \lfloor y^T b \rfloor (i.e., LaTeX: a is the fractional part of the linear combination of the equations, and LaTeX: a_0 is the fractional part of the same linear combination of the right-hand sides). Also, let LaTeX: c = y - \lfloor y \rfloor (fractional part of LaTeX: y). Then, Gomory's cut is:

LaTeX: 
a^T x + c^T s \ge a_0.

The vector LaTeX: y is chosen such that LaTeX: a_0 > 0, so the current basic solution of the LP relaxation (in which the nonbasic variables are 0) violates this inequality.
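
As a small one-row illustration (the numbers here are chosen for concreteness, not taken from a particular source), consider LaTeX: 3x_1 + 2x_2 + s = 5 with LaTeX: y = 1/2. Then

LaTeX: 
a = (3/2, 1) - (1, 1) = (1/2, 0), \;\; a_0 = 5/2 - 2 = 1/2, \;\; c = 1/2,

so the cut is LaTeX: x_1/2 + s/2 \ge 1/2, i.e., LaTeX: x_1 + s \ge 1. Every feasible integer point satisfies it: if LaTeX: x_1 = 0, feasibility forces LaTeX: s = 5 - 2x_2 with LaTeX: x_2 \le 2, so LaTeX: s \ge 1.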


Gomory function

A recursively defined class of functions on LaTeX: \mathbb{Q}^n to LaTeX: \mathbb{Q} (rational n-vectors to rationals) using three operations: (1) non-negative, rational linear combinations, (2) integer rounding, and (3) maximum. Letting LaTeX: R(v) be the integer round-up of LaTeX: v, i.e. the least integer not less than LaTeX: v, we say LaTeX: f is a Gomory function if it is the identity function, LaTeX: f(x)=x, or if it has one of the following forms:

  1. LaTeX: f(x) = a g(x) + b h(x) for some rational LaTeX: a, b \ge 0;
  2. LaTeX: f(x) = R(g(x));
  3. LaTeX: f(x) = \max \{g(x), h(x)\};
where LaTeX: g and LaTeX: h are Gomory functions. It can be shown that this is equivalent to the point-wise maximum of finitely many Chvátal functions:

LaTeX: 
f(x) = \max \{C_1(x), C_2(x), \ldots, C_k(x)\},

where each LaTeX: C_i is a Chvátal function.

This arises in integer linear programming in several ways. One fundamental result is that the optimal value as a function of LaTeX: b is a Gomory function:

LaTeX: 
f^*(b) = \inf\{c^Tx: Ax = b, x \ge 0, \; x \in \mathbb{Z}^n \}.

Here is an example:

LaTeX: 
\begin{array}{rcl}
f^*(b_1, b_2) & = & \min \{ 7x_1 + 4x_2 + 3x_3 : \\
 & & \;\;\;\; 5x_1 + 3x_2 + 2x_3 = b_1, \\
 & & \;\;\;\; x_1 + x_2 + x_3 = b_2, \\
 & & \;\;\;\; x \in \mathbb{Z}^3, \; x \ge 0 \} \\
 & = & \max \{ b_1 + b_2, \; \lceil (3b_1 - b_2)/2 \rceil \}
\end{array}

Note: The above definition assumes minimization; if the ILP is maximization, LaTeX: R(.) would be round-down (i.e., greatest integer not exceeding the value), and condition 3 would be the point-wise minimum.


Gomory group

The group associated with the corner polyhedron problem:

LaTeX: 
\left\{v \in \mathbb{Z}^n :  \sum_j r(A_j)v_j = r(b),\; v \ge 0 \right\},

where LaTeX: v corresponds to the nonbasic variables and LaTeX: r(d) is the remainder, LaTeX: d\!\mod |B|, where LaTeX: |B| is the determinant of the basis, LaTeX: B. The idea is that the nonbasic values chosen in this group, together with the basic levels imputed by the equations LaTeX: Ax=b, yield an integer solution (but a basic level might be negative).


Gradient

Vector of first partial derivatives of a function (assumed to be differentiable at least once). Typically denoted LaTeX: \nabla f(x), where LaTeX: f is the function and LaTeX: x is a point in its domain.


Gradient projection method

A feasible direction method that projects the gradient into the working surface, LaTeX: \{x: Ax=b\}. Suppose LaTeX: A has full row rank. Then, LaTeX: P = I - A^T(AA^T)^{-1}A projects any vector into the null space of LaTeX: A: LaTeX: APy = (A-A)y = 0 for all LaTeX: y \in \mathbb{R}^n. The form of an iteration is LaTeX: x' = x + sd, where LaTeX: d is the projected gradient, LaTeX: P \nabla f(x), and LaTeX: s is determined by line search. Since LaTeX: Ad=0, LaTeX: Ax'=Ax, thus staying in the working surface. (This extends to nonlinear constraints by using the same correction procedure as the generalized reduced gradient method.)
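
A sketch of the iteration with a fixed step size standing in for the line search (the problem data below are my own illustration, assuming NumPy):

import numpy as np

A = np.array([[1.0, 1.0, 1.0]])          # single equality constraint
b = np.array([1.0])
grad_f = lambda x: -2.0 * (x - np.array([0.9, 0.1, 0.4]))   # concave f, maximize

P = np.eye(3) - A.T @ np.linalg.solve(A @ A.T, A)   # projector onto null(A)
x = np.array([1/3, 1/3, 1/3])            # feasible start: Ax = b
for _ in range(100):
    d = P @ grad_f(x)                    # projected gradient direction
    x = x + 0.1 * d                      # Ax stays equal to b since Ad = 0
print(x, A @ x)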


Graph

In one context, this is the functional value and domain: LaTeX: \{(x, z): x \in X, \; z=f(x)\}, where LaTeX: f:X \rightarrow \mathbb{R}. In another context, this is a (maybe undirected) network. In the former context, see also epigraph and hypograph. In the latter context, the notation LaTeX: (V,E) is sometimes used to mean a graph with vertex (or node) set LaTeX: V and edge (or link) set LaTeX: E. We say an edge is incident with the two nodes that define it, and those two nodes are called the endpoints of the edge. The endpoints are said to be adjacent nodes.

An edge whose endpoints are the same node is called a loop. Two edges with the same endpoints are called parallel. A graph is simple if it has no loops or parallel edges. Otherwise, it is called a multigraph. The degree of a node is the number of edges incident to it (counting a loop twice). If two edges have a common endpoint, they are said to be adjacent. A path is a sequence of edges that are successively adjacent. (If the graph is simple, a path can also be represented by the associated sequence of nodes.)

When the edge is directed (or oriented), it is called an arc. When all edges are directed, the graph is called a directed graph, or digraph. With additional properties (e.g., arc weights), this is a network. A path in a digraph can traverse its arcs in either the forward or backward direction. If all arcs are forward, it is a directed path, or dipath.

Here are some special graphs of interest (not exhaustive).

  • bipartite. Nodes decompose into 2 sets such that every link has its endpoints in opposite sets. This applies, for example, to the standard transportation problem, where there are sources in one set and destinations in the other.
  • complete. There is a link between each pair of nodes. A complete graph on n nodes is usually denoted LaTeX: K_n. In the case of a bipartite graph, the term bicomplete is used to mean all possible links are there: between all pairs of nodes in the two sets. A bicomplete graph on sets with LaTeX: m and LaTeX: n nodes, resp., is usually denoted LaTeX: K_{mn}.
  • connected. There is a path between each pair of nodes. If the graph is directed, this divides into:
    1. strongly connected - arcs must be traversed in forward direction;
    2. weakly connected - arcs can be traversed in either direction.
  • cycle (denoted LaTeX: C_n for LaTeX: n nodes). The links are LaTeX: (1,2),(2,3), \ldots, (n-1,n), (n,1).
  • planar. The graph can be drawn in a plane (2D) such that no two links cross (they meet only at incident nodes). This has special properties in multi-commodity flows and shortest paths.
  • subgraph. LaTeX: (V',E') such that LaTeX: V' is a subset of LaTeX: V and LaTeX: E' is a subset of LaTeX: E.
  • tree. Connected and acyclic. One problem is finding a spanning tree that is optimal in some sense.
  • undirected. No link is directed.


Greedy algorithm

Applies when the optimization problem is to decide whether or not to include some element from a given set. A greedy algorithm begins with no elements and sequentially selects an element from the feasible set of remaining elements by myopic optimization. (The elements could have been sorted by some criterion, such as associated weights.) This results in an optimal solution to the problem if and only if there is an underlying matroid structure (for example, see spanning tree). Further details are in the supplement, Greedy Algorithms.
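
For example, Kruskal's method for the minimum spanning tree is the greedy algorithm on the graphic matroid, so it provably returns an optimal tree; a minimal sketch:

def kruskal(n, edges):
    """edges: list of (weight, u, v) on nodes 0..n-1."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    tree = []
    for w, u, v in sorted(edges):           # sort by weight: the greedy order
        ru, rv = find(u), find(v)
        if ru != rv:                        # feasible: no cycle is created
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

print(kruskal(4, [(1, 0, 1), (4, 0, 2), (2, 1, 2), (3, 1, 3), (5, 2, 3)]))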


Groebner basis

This arises in parametric integer (linear) programming, varying the right-hand side LaTeX: (b). A precise definition is involved, stemming from ideal theory in abstract algebra, and there are many other applications outside of integer programming. In the context of ILP, a Gröbner basis is a minimal set of change directions, LaTeX: \{h^1, h^2, \ldots, h^k\}, called a test set, such that the objective function improves along those directions. To elaborate, suppose we have

LaTeX: 
\mbox{ILP}(b): \min c^Tx : Ax = b, \; x \in \mathbb{Z}^n, \; x \ge 0,

where LaTeX: A and LaTeX: b have integer elements. Let LaTeX: X^*(b) = \arg\!\min \{c^Tx : Ax = b, \;x \in \mathbb{Z}^n, \; x \ge 0\} and LaTeX: B = \{b: X^*(b) \neq \emptyset\} (i.e., LaTeX: \mbox{ILP}(b) is feasible and bounded). A Gröbner basis for LaTeX: A, holding LaTeX: c fixed, is a set, LaTeX: G_c = \{h^i\}, for which the following conditions hold:

  1. LaTeX: Ah^i = 0.
  2. LaTeX: c^T h^i < 0 (for minimization).
  3. For LaTeX: b \in B and LaTeX: x \not\in X^*(b), there exists LaTeX: h^i \in G_c such that LaTeX: x + h^i \ge 0.

Condition 1 says that LaTeX: A(x+h^i)=b if LaTeX: Ax=b, and condition 2 says LaTeX: c^T(x+h^i) < c^Tx. Therefore, if LaTeX: x is any feasible solution to LaTeX: \mbox{ILP}(b), either LaTeX: x is optimal for that right-hand side LaTeX: b, or condition 3 ensures there is an improvement obtained by adding some vector, LaTeX: h^i \in G_c.



Group problem

Solving the corner polyhedron problem from the perspective of the Gomory group.

