GAC 
See generalized arc consistency.
GAMS 
GAMS stands for the General Algebraic Modeling System, a modeling language for mathematical programming.
GRASP 
GRASP stands for Greedy Randomized Adaptive Search Procedure. It is a metaheuristic that uses restarts to search many portions of the solution space. From each start, some heuristic is employed to reach a local optimum. The choice of local search can be problem-specific, such as using an n-opt neighborhood for the TSP, or it can use another metaheuristic, like simulated annealing or tabu search.
Game theory 
In general, a (mathematical) game can be played by one player, such as a puzzle, but its main connection with mathematical programming arises when there are at least two players, and they are in conflict. Each player chooses a strategy that maximizes his payoff. If there are exactly two players and one player's loss is the other's gain, the game is called zero sum. In this case, a payoff matrix, A, is given, where A(i, j) is the payoff to player 1, and the loss to player 2, if player 1 uses strategy i and player 2 uses strategy j. In this representation each row of A corresponds to a strategy of player 1, and each column corresponds to a strategy of player 2. If A is m × n, player 1 has m strategies, and player 2 has n strategies. Here is an example of a payoff matrix:
A primary connection with mathematical programming is duality. Suppose x and y are strategy probability vectors chosen by the two players, i.e., in a particular play, x(i) is the probability player 1 chooses strategy i and y(j) is the probability player 2 chooses strategy j. Then, the value, V(x, y), of the game is the expected payoff: V(x, y) = sum_i sum_j x(i) A(i, j) y(j),
where player 1 seeks to maximize V and player 2 seeks to minimize V. A pure strategy is a unit vector. For example, if x = (1, 0), player 1 is using his first strategy 100% of the time; if y = (0, 1), player 2 is using his second strategy 100% of the time. The expected payoff is then simply the cell entry, A(1, 2), which is 5 in the example (player 2 pays 5 units to player 1). Strategies that are not pure are called mixed. For example, x = (1/2, 1/2) means player 1 plays each of the two strategies 50% of the time (flipping a coin on each play); y = (1/4, 3/4) means player 2 plays his first strategy 25% of the time and his second the remaining 75%. The expected payoff in the example is 5/8.
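The expected-payoff computation can be sketched directly; the 2×2 matrix below is a made-up illustration, not the glossary's original example:

```python
from fractions import Fraction as F

def expected_payoff(x, A, y):
    """V(x, y) = sum_i sum_j x(i) A(i, j) y(j)."""
    return sum(xi * aij * yj
               for xi, row in zip(x, A)
               for aij, yj in zip(row, y))

A = [[F(4), F(-2)],         # hypothetical 2x2 payoff matrix
     [F(-1), F(3)]]

# Pure strategies are unit vectors: the payoff is a single cell entry.
print(expected_payoff([1, 0], A, [0, 1]))   # a(1,2) = -2

# Mixed strategies weight the cells by the joint probabilities.
x = [F(1, 2), F(1, 2)]      # player 1 flips a coin
y = [F(1, 4), F(3, 4)]      # player 2 plays column 2 75% of the time
print(expected_payoff(x, A, y))             # 3/4
```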
The fundamental duality theorem of linear programming (see the supplement on mathematical programming duals) provides an equivalent primal-dual formulation: player 1 solves the primal and player 2 the dual of a linear program over the strategy simplexes.
Player 1 (primal): max v : sum_i x(i) A(i, j) >= v for all j, sum_i x(i) = 1, x >= 0.
Player 2 (dual): min v : sum_j A(i, j) y(j) <= v for all i, sum_j y(j) = 1, y >= 0.
This is the simplest 2-person game; more generally, there can be n persons, and subsets of the players can form cooperative agreements. If this is not allowed, the game is called noncooperative.
A combinatorial game is a cooperative game with restrictions on the formation of coalitions that induce a combinatorial structure, such as a matroid.
Gauge function 
A nonnegative, convex function, f, from R^n to the extended reals that is positively homogeneous and satisfies f(0) = 0. A norm is a gauge function. Here is an example that is not a norm: f(x) = max{0, x} on R is a gauge, but f(-1) = 0 with -1 not equal to 0, so f is not a norm.
Gauss-Jordan elimination 
A method to solve Ax = b that performs elementary row operations on A to annihilate successive elements of A in order to reduce A to an identity matrix. On paper, the same operations are applied to b, and the solution is then read from the resulting identity system. In a computer, the matrices effecting the elementary operations are saved as elementary matrices, say E(i) for the ith operation. Then, the system is equivalent to E(k)···E(1) A x = E(k)···E(1) b, and forward transformation is applied to solve for x. This is what is done in the (revised) simplex method, where each iteration is a pivot operation.
Gauss-Seidel method 
An iterative method to solve Ax = b that uses the latest values of the components of x during the updates. Specifically, A is split into L + U, where L is lower triangular (including the diagonal) and U is strictly upper triangular, and the update solves L x' = b − U x. (Surprisingly, history indicates Gauss did not know about this method, and Seidel did not recommend it.) Often people use the term to mean the use of latest values, rather than the specific splitting. The distinction arises, for example, in parallel computation, where using the latest value of x(j) during the computation of x(i) is avoided because it could not take advantage of the parallelism.
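A minimal sketch, assuming a diagonally dominant system (one standard sufficient condition for convergence); the matrix below is a made-up example:

```python
def gauss_seidel(A, b, iters=50):
    """Iterate on A x = b, using each newly computed component of x
    immediately (the splitting A = L + U with L lower triangular)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # x[0..i-1] are already updated
    return x

# A diagonally dominant system; the exact solution is x = (1, 1).
print(gauss_seidel([[2.0, 1.0], [1.0, 2.0]], [3.0, 3.0]))
```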
Gaussian elimination 
A method to solve Ax = b that performs elementary row operations on A to annihilate successive elements of A below the diagonal in order to reduce A to an upper triangular matrix, U. On paper, the same operations are applied to b; the solution is then obtained by solving the resulting upper triangular system. In a computer, the elementary row operations correspond to a lower triangular matrix, L, with unit diagonal. Once this phase is completed, the system becomes LUx = b. This is then solved in two steps: forward substitution solves Ly = b; then backward substitution solves Ux = y. (Of course, computer implementations vary.)
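The two phases can be sketched as follows, assuming nonzero pivots (a real implementation would pivot for stability); the 2×2 system is a made-up example:

```python
def solve_gaussian(A, b):
    """Reduce A to upper triangular form (applying the same row
    operations to b), then back-substitute.  No pivoting: assumes the
    pivots encountered are nonzero."""
    n = len(b)
    A = [row[:] for row in A]       # work on copies
    b = b[:]
    for k in range(n):              # forward elimination
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # backward substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

print(solve_gaussian([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0]))  # [1.0, 1.0]
```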
Generalized Lagrange multiplier method 
The Generalized Lagrange Multiplier method (GLM) solves a sequence of Lagrangian optimization (relaxation) problems, searching the multiplier space by some method to seek a minimum of the maximum Lagrangian function.
 Given multipliers λ >= 0, find x* in argmax{f(x) − λg(x) : x in X} (a Lagrangian maximum).
 Test if 0 is in the convex hull of {g(x*) : x* maximizes the Lagrangian}. If so, stop (x* solves the original mathematical program if g(x*) = 0; otherwise, search alternative optima; if no Lagrangian maximum generates 0 as a member of the hull, the right-hand side (0) is in a duality gap).
 Given the Lagrangian maxima, choose a new λ to reduce the value of the maximum Lagrangian. GLM is related to several other constructs:
 Column generation – maximizing the Lagrangian generates a column in the primal of the randomized program.
 Cutting plane – maximizing the Lagrangian generates a cutting plane in the dual of the randomized program.
 Dantzig-Wolfe decomposition – the master problem is the randomized program; the subproblem is maximizing the Lagrangian.
 Lagrangian duality – GLM solves the Lagrangian dual; (a duality gap occurs when there is no optimal pure strategy in the randomized program).
 Randomized program – This is what GLM solves, and a mixed strategy could have meaning in an application.
Generalized arc consistency 
An extension of arc consistency to constraints with more than two variables in their scope.
A variable is generalized arc consistent (GAC) with a constraint if every value in the variable's domain can be extended to all the other variables of the constraint in such a way that the constraint is satisfied.
Generalized arc consistency is one of the most commonly enforced forms of consistency in constraint programming.
GAC has also been called hyper-arc consistency and domain consistency.
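A minimal (and deliberately naive) way to enforce GAC on a single constraint is to delete every value that lacks a support, i.e., an assignment of the other variables in the scope that satisfies the constraint. The `enforce_gac` helper and the x + y = z constraint below are illustrative; real propagators are far more efficient:

```python
from itertools import product

def enforce_gac(domains, scope, predicate):
    """Remove every value that has no support: no assignment of the
    remaining variables in the scope satisfying the constraint."""
    changed = True
    while changed:
        changed = False
        for v in scope:
            others = [w for w in scope if w != v]
            for val in list(domains[v]):
                supported = any(
                    predicate({v: val, **dict(zip(others, rest))})
                    for rest in product(*(domains[w] for w in others)))
                if not supported:
                    domains[v].remove(val)
                    changed = True
    return domains

# Toy ternary constraint x + y = z on small domains.
doms = {'x': {1, 2}, 'y': {1, 2}, 'z': {2, 5}}
enforce_gac(doms, ['x', 'y', 'z'], lambda a: a['x'] + a['y'] == a['z'])
print(doms)
```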
Generalized equation 
This has the form: 0 is in f(x) + T(x), where f is a function from R^n into R^n and T is a point-to-set map that maps x into subsets of R^n. If T is absent, the condition reduces to an ordinary equation, f(x) = 0. Usually, T is assumed to satisfy the monotonicity condition: (u − v)(x − y) >= 0 for all u in T(x) and v in T(y).
This includes a variational inequality (and hence the complementarity problem) as a special case.
Generalized inverse 
Suppose A is any m × n matrix. G is a generalized inverse of A if G is n × m and AGA = A. Then, a fundamental theorem for linear equations is:
The equation Ax = b has a solution if and only if AGb = b for any generalized inverse, G, in which case the solutions are of the form x = Gb + (I − GA)z for any z.
Here is an Example.
The Moore-Penrose class additionally requires that GAG = G and that AG and GA be symmetric (hermitian, if A is complex). In particular, if A is nonsingular, A^(-1) is the Moore-Penrose inverse. Note that (GA)(GA) = GA, and GA is a projection matrix for the subspace {GAz}, since AGA = A implies A(GAz) = Az. Further, I − GA projects into its orthogonal complement, which is the null space of A, i.e., A(I − GA)z = 0 for any z.
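The defining identity AGA = A and the solvability test are easy to check numerically; the singular matrix A and generalized inverse G below are illustrative choices (G is not the Moore-Penrose inverse):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# A singular matrix and one of its generalized inverses (A G A = A).
A = [[1, 0], [0, 0]]
G = [[1, 0], [0, 7]]            # the bottom-right entry is arbitrary
assert matmul(matmul(A, G), A) == A

def solvable(A, G, b):
    """A x = b has a solution iff A G b = b."""
    col = [[bi] for bi in b]
    return matmul(A, matmul(G, col)) == col

print(solvable(A, G, [3, 0]))   # True: x = (3, anything) works
print(solvable(A, G, [0, 1]))   # False: the second equation reads 0 = 1
```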
Generalized network 
A network in which the flow that reaches the destination could differ from the flow that left the source. In the incidence matrix, instead of an arc column with entries +1 and −1, we have entries +1 and −g, where g > 0. If g < 1, there is a loss; if g > 1, there is a gain.
Generalized reduced gradient method 
The Generalized Reduced Gradient method (GRG) generalizes the reduced gradient method by allowing nonlinear constraints and arbitrary bounds on the variables. The form is:
max f(x) : h(x) = 0, L <= x <= U,
where h has dimension m. The method supposes we can partition x = (v, w) such that:
 v has dimension m (and w has dimension n − m);
 the values of v are strictly within their bounds (this is a nondegeneracy assumption);
 the Jacobian of h with respect to v is nonsingular at x.
As in the linear case, for any w there is a unique value, v(w), such that h(v(w), w) = 0 (cf. the Implicit Function Theorem), which implies that dv/dw = −(dh/dv)^(-1)(dh/dw). The idea is to choose the direction of the independent variables w to be the reduced gradient: d = grad_w f − p(dh/dw), where p = (grad_v f)(dh/dv)^(-1). Then, the step size is chosen and a correction procedure applied to return to the surface, h(x) = 0.
The main steps (except the correction procedure) are the same as the reduced gradient method, changing the working set as appropriate.
Generalized upper bound 
A generalized upper bound (GUB) is an upper bound constraint on a sum of variables: sum_{j in J} x(j) <= b. A collection of these whose index sets do not overlap (i.e., are pairwise disjoint) is the foundation for the generalized upper bounding technique in linear programming. There, the summation constraints are marked in the data structure rather than represented explicitly as rows of the LP, and the basis is handled in its reduced dimension to gain computational efficiency in both time and space.
Genetic algorithm 
Genetic algorithms (GAs) are a class of algorithms inspired by the mechanisms of genetics, which have been applied to global optimization (especially combinatorial programs). A GA requires the specification of three operations (each typically probabilistic) on objects called "strings" (these could be real-valued vectors):
 Reproduction – combining strings in the population to create a new string (offspring). Example: taking the 1st character from the 1st parent and the rest of the string from the 2nd parent.
 Mutation – spontaneous alteration of characters in a string. Example: altering just the leftmost character of the string.
 Crossover – combining strings to exchange values, creating new strings in their place. Example: with the crossover location at 2, the parents exchange their first two characters.
The general scheme is:
0. Initialize population.
1. Select parents for reproduction and the GA operators (viz., mutation and crossover).
2. Perform the operations to generate an intermediate population and evaluate its fitness values.
3. Select members of the population to remain with the new generation.
Repeat steps 1-3 until some stopping rule is reached.
The original GA (1973, by John Holland) used crossover and total population replacement: a population of 2N objects (called chromosomes) forms N pairings of parents that produce 2N offspring. The offspring comprise the new generation, and they become the total population, replacing their parents. More generally, a population of size N produces an intermediate population of N+M, from which Ñ members are kept to form the new population. One way to choose which Ñ survive is to keep those with the greatest fitness values – survival of the fittest.
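The scheme above can be sketched in a few lines; the population size, mutation rate, and elitist survival rule below are illustrative choices, not Holland's original design:

```python
import random

def genetic_max(fitness, nbits, pop_size=20, gens=40, seed=1):
    """Toy GA maximizing `fitness` over bit strings: single-point
    crossover, occasional mutation, elitist survival of the fittest."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(nbits)]
           for _ in range(pop_size)]
    for _ in range(gens):
        kids = []
        for _ in range(pop_size // 2):
            p1, p2 = rng.sample(pop, 2)      # select parents
            cut = rng.randrange(1, nbits)    # crossover location
            kids.append(p1[:cut] + p2[cut:])
            kids.append(p2[:cut] + p1[cut:])
        for kid in kids:                     # mutation: flip one bit
            if rng.random() < 0.1:
                kid[rng.randrange(nbits)] ^= 1
        # Elitist survival: keep the fittest pop_size members.
        pop = sorted(pop + kids, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

# One-max problem: the fittest 12-bit string is all ones.
best = genetic_max(sum, nbits=12)
print(sum(best))
```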
Geometric convergence 
See convergence.
Geometric mean 
Given a nonnegative x in R^n and a weight vector, w, in the simplex (w >= 0 and sum_i w(i) = 1), the geometric mean is the value: prod_i x(i)^w(i).
For example, for w = (1/2, 1/2), the geometric mean of (x(1), x(2)) is sqrt(x(1) x(2)). A fundamental inequality that provides a foundation for geometric programming is that the geometric mean is never greater than the arithmetic mean:
prod_i x(i)^w(i) <= sum_i w(i) x(i)
for all nonnegative x and w in the simplex. Further, equality holds only if x(i) is the same for all i.
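The definition and the inequality are easy to verify numerically; the vector and weights below are illustrative:

```python
import math

def geometric_mean(x, w):
    """prod_i x(i)**w(i) for nonnegative x and weights w summing to 1."""
    return math.prod(xi ** wi for xi, wi in zip(x, w))

x, w = [1.0, 9.0], [0.5, 0.5]
gm = geometric_mean(x, w)                    # sqrt(1 * 9) = 3.0
am = sum(wi * xi for wi, xi in zip(w, x))    # 0.5*1 + 0.5*9 = 5.0
assert gm <= am                              # geometric <= arithmetic
print(gm, am)
```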
Geometric program 
A mathematical program of the form
min g0(x) : gk(x) <= 1 for k = 1,...,m, x > 0,
where each gk is a posynomial (k = 0,...,m). Conventionally, the monomials are indexed sequentially across the posynomials, so that each posynomial is a sum of consecutively indexed monomials.
For example, consider:
Then, there are 3 variables, 2 constraints, 2 monomials in the objective, 1 monomial in constraint 1, and 2 monomials in constraint 2.
The exponent matrix is the matrix whose ith row contains the exponents of the variables in the ith monomial. The example has 5 rows and 3 columns:
The degree of difficulty is the number of terms in all posynomials minus the rank of the exponent matrix minus one (the rank minus one being the number of independent variables). In the above example, the rank of the exponent matrix is 3, so the degree of difficulty is 5 − 3 − 1 = 1. If the last constraint were not present, only the first three rows comprise the exponent matrix, and its rank is 2. In that case, the degree of difficulty is 3 − 2 − 1 = 0. (Using the geometric dual, the solution then reduces to solving a system of linear equations, which is what motivates the terminology.)
Also see superconsistent.
Since its inception, the posynomial form has been extended to signomials. In that case, the duality need not be strong, as there can be a duality gap. The general, or signomial, geometric program is of the form:
min g0(x) : gk(x) <= 1 for k = 1,...,m, x > 0,
where each gk is the difference of two posynomials (k = 0,...,m).
Global Cardinality constraint 
This is a global constraint that is specified on assignment variables x(1),...,x(n) and count variables c(1),...,c(m), and enforces that each value v(j) is assigned to exactly c(j) of the assignment variables, j = 1,...,m. The global cardinality constraint is a generalization of the all-different constraint, which requires that every value be assigned to at most one variable.
Global constraint 
A global constraint is a family of constraints parameterized by the size, n, of the constraint's scope (normally unbounded). The relation represented by the constraint is often defined implicitly (i.e., intensionally) via a compact set of characteristics that all satisfying tuples must exhibit. For example, the all-different constraint is satisfied by all assignments to its variables in which no two variables take the same value. Such implicit definitions are then expressed and enforced by the constraint propagator corresponding to the global constraint.
Global constraints may also be explicitly (i.e., extensionally) defined by providing a set of satisfying (or non-satisfying) tuples.
Among the most common global constraints are the all-different and global cardinality constraints.
Global convergence 
See convergence.
Global optimization 
Generally refers to mathematical programming without convexity assumptions; such problems are NP-hard in general. There could be a local optimum that is not a global optimum. Some authors use this term to imply the stronger condition that there are multiple local optima. Some solution strategies are given as heuristic search methods (including those that guarantee global convergence, such as branch and bound). As a process associated with algorithm design, some regard this simply as attempts to ensure convergence to a global optimum (unlike a purely local optimization procedure, like steepest ascent). See also Glover's linearization.
Global optimum 
A solution to a mathematical program (as distinct from a local optimum). When the mathematical program is not convex, there could be (and typically are) many local optima that are not global optima.
Glover's linearization 
Given a binary variable x and a linear function f(w) in discrete and/or continuous variables w in W for some set W, this linearization reformulates the product x f(w) with a (free) continuous variable y and enforces y = x f(w) by adding four linear inequalities:
Lx <= y <= Ux and f(w) − U(1 − x) <= y <= f(w) − L(1 − x),
where the values L and U are defined as
L = min{f(w) : w in W'} and U = max{f(w) : w in W'},
and W' is any relaxation of W.
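The four inequalities can be checked by enumeration: for fixed binary x and a fixed value of f(w) with L <= f(w) <= U, the feasible range of y collapses to the single point x·f(w). The bounds below are made-up values:

```python
def glover_interval(x, f, L, U):
    """Feasible range of y under the four inequalities
         L*x <= y <= U*x,
         f - U*(1 - x) <= y <= f - L*(1 - x),
    for fixed binary x and fixed f = f(w) with L <= f <= U."""
    lo = max(L * x, f - U * (1 - x))
    hi = min(U * x, f - L * (1 - x))
    return lo, hi

L, U = -2, 10                       # made-up bounds on f(w)
for f in (-2, 0, 7, 10):
    for x in (0, 1):
        assert glover_interval(x, f, L, U) == (x * f, x * f)
print("y is forced to equal x*f(w) at both binary values")
```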
Goal program 
A goal is neither a constraint nor an objective because it is neither a firm requirement nor a function that must be at an extreme value. In English, a goal represents a relation that is desired, but it may be violated if there is adequate compensation. For example, we may have a budget of $1000, and we prefer to operate within it, but if spending just one more dollar yields a major improvement in operations, we would consider spending $1001.
A goal program is put into a standard form mathematical program in two steps:
 Add a variable, v >= 0, and the constraint g(x) − v <= b, to measure the level of violation of the goal g(x) <= b. (If the goal were g(x) = b, instead of g(x) <= b, the added constraint would be g(x) − u + v = b, where u and v are (nonnegative) levels of violation below and above the goal, respectively.)
 Add a penalty term to the objective: P(v), where P(0) = 0, and P is strictly increasing – i.e., u >= v and u(i) > v(i) for some i imply P(u) > P(v).
The resulting mathematical program represents the goal program. If the goal is satisfied, v = 0; otherwise, the penalty term, P(v), reflects "adequate compensation" for the violation.
Golden mean 
The golden mean, or golden ratio, is the positive solution to the quadratic equation x² + x − 1 = 0, which is (sqrt(5) − 1)/2, or approximately 0.618. It has the proportionality property 1 : g = g : (1 − g), which defines the placement of evaluations in the golden section search.
Golden section search 
This finds an interval, [a, b], that contains a maximum of a unimodal function whose domain is the original interval, such that the length of the final interval satisfies b − a <= t (where the tolerance t is specified). The golden ratio is denoted below by g, which is approximately 0.618.
 Initially compute x = b − g(b − a) and y = a + g(b − a), and set the interval of uncertainty to the original endpoints, a and b. In general, we have x and y placed in [a, b] such that their distances from the endpoints are the same:
a ---- x ---- y ---- b, with b − x = y − a = g(b − a).
 Given the interval [a, b] contains evaluations f(x) and f(y), with a <= x < y <= b, compare: if f(x) > f(y), replace b with y; if f(x) < f(y), replace a with x. (If f(x) = f(y), this leaves the interval [x, y], in which case evaluate two new interior points.)
One of the following cases prevails: f(x) > f(y), where (y, b] is dropped and the old x becomes the new y; f(x) < f(y), where [a, x) is dropped and the old y becomes the new x; or f(x) = f(y), where both [a, x) and (y, b] are dropped.
The golden ratio is the limit of the ratios of successive Fibonacci numbers: F(n)/F(n+1) → g. Thus, the golden section search approaches Fibonacci search as the number of function evaluations (N) becomes large.
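A minimal sketch of the search, assuming a unimodal function on a closed interval; note that one interior evaluation is reused per iteration, which is the point of the golden ratio placement:

```python
import math

def golden_section_max(f, a, b, tol=1e-6):
    """Shrink [a, b] about a maximum of a unimodal f until b - a <= tol,
    reusing one interior evaluation per iteration."""
    g = (math.sqrt(5) - 1) / 2            # the golden ratio, ~0.618
    x, y = b - g * (b - a), a + g * (b - a)
    fx, fy = f(x), f(y)
    while b - a > tol:
        if fx > fy:                       # maximum lies in [a, y]
            b, y, fy = y, x, fx           # old x becomes the new y
            x = b - g * (b - a)
            fx = f(x)
        else:                             # maximum lies in [x, b]
            a, x, fx = x, y, fy           # old y becomes the new x
            y = a + g * (b - a)
            fy = f(y)
    return (a + b) / 2

print(golden_section_max(lambda t: -(t - 2) ** 2, 0.0, 5.0))
```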
Gomory cut 
This is an early cutting plane for an integer program. Consider Ax = b, where x >= 0 and the data and variables have integer values. For any vector λ, let f = λA − floor(λA) and f0 = λb − floor(λb) (i.e., f is the vector of fractional parts of the linear combination of the equations, and f0 is the fractional part of the same linear combination of the right-hand sides). Then, Gomory's cut is:
f x >= f0.
The vector λ is chosen such that f0 > 0 and the current (fractional) solution of the LP relaxation violates this inequality.
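As a sketch under the definitions above (the single equation and the multiplier λ = 1/2 are made-up illustrations), exact rational arithmetic makes the fractional parts explicit:

```python
from fractions import Fraction as F
from math import floor

def gomory_cut(A, b, lam):
    """Fractional cut from lambda*(Ax = b): with
    f(j) = frac(sum_i lam(i) A[i][j]) and f0 = frac(lam . b),
    the cut is sum_j f(j) x(j) >= f0."""
    frac = lambda q: q - floor(q)
    coeffs = [frac(sum(li * aij for li, aij in zip(lam, col)))
              for col in zip(*A)]
    rhs = frac(sum(li * bi for li, bi in zip(lam, b)))
    return coeffs, rhs

# Single equation 2 x1 + 3 x2 = 5 with lambda = (1/2,).
A = [[F(2), F(3)]]
b = [F(5)]
coeffs, rhs = gomory_cut(A, b, [F(1, 2)])
print(coeffs, rhs)     # the cut (1/2) x2 >= 1/2, i.e., x2 >= 1
```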
Gomory function 
A recursively defined class of functions from Q^n to Q (rational n-vectors to rationals) using three operations: (1) nonnegative, rational linear combinations, (2) integer rounding, and (3) maximum. Letting ceil(v) be the integer round-up of v, i.e., the least integer not less than v, we say f is a Gomory function if it is the identity function, or if it has one of the following forms:
 f = q1 g1 + q2 g2 for some q1 and q2 nonnegative and rational;
 f = ceil(g);
 f = max{g1, g2};
where each g, g1, g2 is a Gomory function. A Gomory function formed without the max operation is a Chvátal function.
This arises in integer linear programming in several ways. One fundamental result is that the optimal value as a function of is a Gomory function:
Here is an example:
Note: The above definition assumes minimization; if the ILP is maximization, the rounding would be round-down (i.e., greatest integer not exceeding the value), and condition 3 would be the pointwise minimum.
Gomory group 
The group associated with the corner polyhedron problem:
where x_N corresponds to the nonbasic variables and v mod D is the remainder of v upon division by D, the determinant of the basis, B. The idea is that the nonbasic values chosen in this group, together with the basic levels imputed by the equations B x_B = b − N x_N, yield an integer solution (but a basic level might be negative).
Gradient 
Vector of first partial derivatives of a function (assumed to be differentiable at least once). Typically denoted grad f(x), where f is the function and x is a point in its domain.
Gradient projection method 
A feasible direction method that projects the gradient into the working surface, {x : Ax = b}. Suppose A has full row rank. Then, P = I − A'(AA')^(-1)A projects any vector into the null space of A: APy = 0 for all y. The form of an iteration is x' = x + sd, where d is the projected gradient, P grad f(x), and the step size s is determined by line search. Since Ad = 0, Ax' = Ax, thus staying in the working surface. (This extends to nonlinear constraints by using the same correction procedure as the generalized reduced gradient method.)
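For a single equality constraint the projection matrix reduces to P = I − a a'/(a'a); the sketch below (with a made-up constraint row and gradient) verifies that the projected direction stays in the working surface:

```python
def project_null(a, v):
    """P v for P = I - a a'/(a'a): the projection of v into the
    null space of the single-row matrix A = [a]."""
    t = sum(ai * vi for ai, vi in zip(a, v)) / sum(ai * ai for ai in a)
    return [vi - t * ai for ai, vi in zip(a, v)]

# Working surface {x : x1 + x2 + x3 = const}; project a gradient onto it.
a = [1.0, 1.0, 1.0]
d = project_null(a, [3.0, 0.0, 0.0])
print(d)                                                   # [2.0, -1.0, -1.0]
assert abs(sum(di * ai for di, ai in zip(d, a))) < 1e-12   # A d = 0
```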
Graph 
In one context, this is the functional value and domain: {(x, f(x)) : x in X}, where f is a function on X. In another context, this is a (possibly undirected) network. In the former context, see also epigraph and hypograph. In the latter context, the notation G = (V, E) is sometimes used to mean a graph with vertex (or node) set V and edge (or link) set E. We say an edge is incident with the two nodes that define it, and those two nodes are called the endpoints of the edge. The endpoints are said to be adjacent nodes.
An edge whose endpoints are the same node is called a loop. Two edges with the same endpoints are called parallel. A graph is simple if it has no loops or parallel edges. Otherwise, it is called a multigraph. The degree of a node is the number of edges incident to it (counting a loop twice). If two edges have a common endpoint, they are said to be adjacent. A path is a sequence of edges that are successively adjacent. (If the graph is simple, a path can also be represented by the associated sequence of nodes.)
When the edge is directed (or oriented), it is called an arc. When all edges are directed, the graph is called a directed graph, or digraph. With additional properties (e.g., arc weights), this is a network. A path in a digraph can traverse its arcs in either the forward or backward direction. If all arcs are forward, it is a directed path, or dipath.
Here are some special graphs of interest (not exhaustive).
 bipartite. nodes decompose into 2 sets such that every link has its endpoints in opposite sets. This applies, for example, to the standard transportation problem, where there are sources in one set and destinations in the other.
 complete. There is a link between each pair of nodes. A complete graph on n nodes is usually denoted K(n). In the case of a bipartite graph, the term bicomplete is used to mean all possible links are there: between all pairs of nodes in the two sets. A bicomplete graph on sets with m and n nodes, resp., is usually denoted K(m, n).
 connected. There is a path between each pair of nodes. If the graph is directed, this divides into:
 strongly connected  arcs must be traversed in forward direction;
 weakly connected  arcs can be traversed in either direction.
 cycle (denoted C(n) for n nodes). The links are (1,2), (2,3), ..., (n−1,n), (n,1).
 planar. The graph can be drawn in a plane (2D) such that no two links cross (they meet only at incident nodes). This has special properties in multicommodity flows and shortest paths.
 subgraph. G' = (V', E') such that V' is a subset of V and E' is a subset of E.
 tree. Connected and acyclic. One problem is finding a spanning tree that is optimal in some sense.
 undirected. No link is directed.
Greedy algorithm 
Applies when the optimization problem is to decide whether or not to include each element from a given set. A greedy algorithm begins with no elements and sequentially selects an element from the feasible set of remaining elements by myopic optimization. (The elements could first be sorted by some criterion, such as associated weights.) This results in an optimal solution to the problem if and only if there is an underlying matroid structure (for example, see spanning tree). Further details are in the supplement, Greedy Algorithms.
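Kruskal's minimum spanning tree algorithm is the classic instance where the greedy algorithm is exact (the underlying structure is a graphic matroid); the 4-node graph below is a made-up illustration:

```python
def kruskal(n, edges):
    """Greedy MST: scan edges cheapest-first, keeping an edge unless it
    would close a cycle (tracked by union-find on components)."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    tree = []
    for w, u, v in sorted(edges):           # sort by weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # myopic: cheapest safe edge
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]   # (weight, u, v)
mst = kruskal(4, edges)
print(sum(w for w, _, _ in mst))   # 1 + 2 + 4 = 7
```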
Groebner basis 
This arises in parametric integer (linear) programming, varying the right-hand side b. A precise definition is involved, stemming from ideal theory in abstract algebra, and there are many other applications outside of integer programming. In the context of ILP, a Gröbner basis is a minimal set of change directions, called a test set, such that the objective function improves along those directions. To elaborate, suppose we have
min{cx : Ax = b, x >= 0, x integer},
where A and c have integer elements, and suppose the program is feasible and bounded for the right-hand sides of interest. A Gröbner basis for A, holding c fixed, is a set, G, for which the following conditions hold:
 Ag = 0 for every g in G.
 cg > 0 for every g in G (for minimization).
 For any feasible x that is not optimal, there exists g in G such that x − g is feasible.
Condition 1 says that A(x − g) = b if Ax = b, and condition 2 says c(x − g) < cx. Therefore, if x is any feasible solution, either x is optimal for the particular right-hand side b = Ax, or condition 3 ensures there is an improvement by moving along some vector, g in G.
Here is a numerical example.
Group problem 
Solving the corner polyhedron problem from the perspective of the Gomory group.
Categories: Constraint Programming | Linear Algebra & Geometry | Linear Programming | MIP/CO | Nonlinear Programming