# All Pages

### From Glossary

### FTRAN

This applies to the factored system, $Bx = b$, where $B = E_1 E_2 \cdots E_k$ and each $E_i$ is an elementary matrix. The recursion is:

$$E_1 y^1 = b,\quad E_2 y^2 = y^1,\quad \dots,\quad E_k y^k = y^{k-1}.$$

In the end, $y^k$ is the solution. Each elementary system is solved as follows. For notational convenience, suppose the system is $Ey = v$, and $\eta$ is the distinguished $p$-th column of $E$. Then,

$$y_p = v_p / \eta_p, \qquad y_i = v_i - \eta_i y_p \ \text{ for } i \ne p.$$

This is what is done in the (revised) simplex method, and each elementary solution is a pivot operation.
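The elementary solve above can be sketched in code by storing each elementary matrix as its distinguished column (the function names and list-based representation are illustrative assumptions, not part of the glossary):

```python
def solve_elementary(eta, p, v):
    """Solve E y = v, where E is the identity matrix except that its
    p-th column is the vector eta (with eta[p] != 0)."""
    y = list(v)
    y[p] = v[p] / eta[p]                 # pivot position first
    for i in range(len(v)):
        if i != p:
            y[i] = v[i] - eta[i] * y[p]  # back out eta's contribution
    return y

def ftran(factors, b):
    """Solve (E_1 E_2 ... E_k) x = b by solving each elementary
    system in turn; factors is a list of (eta, p) pairs."""
    y = list(b)
    for eta, p in factors:
        y = solve_elementary(eta, p, y)
    return y
```

Each call to `solve_elementary` is one pivot operation, matching the recursion above.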

### Face

A convex subset, say $F$, of a convex set, say $C$, such that for any $x$ and $y$ in $C$: if $F$ contains a point of the open line segment $(x, y)$, then $x, y \in F$.

The set $C$ is itself a face of $C$, and most authors consider the empty set a face. The faces of zero dimension are the extreme points of $C$. When $C$ is a polyhedron, i.e. $C = \{x : Ax \le b\}$, the faces are the subsystems with some inequalities holding with equality: $F = \{x : A_I x = b_I,\ A_J x \le b_J\}$, where $I \subseteq \{1, \dots, m\}$ and $J = \{1, \dots, m\} \setminus I$.

### Facet

A face of a convex set, not equal to the set, of maximal dimension. If the set is polyhedral, say $P = \{x : Ax \le b\}$, where the defining inequalities are irredundant, then the facets are in one-to-one correspondence with the constraints $i$ for which the equality is not forced – i.e., there exists $x$ in $P$ for which $A_i x < b_i$. Here $A_i$ is the i-th row of $A$.

### Facet-defining inequality

In integer programming, an inequality, $ax \le b$, such that $P_I \cap \{x : ax = b\}$ is a facet of the integer polyhedron, $P_I$.

### Facility location problem

Find a location (say x-y coordinates) that minimizes a weighted sum of distances to each of several locations. An example is to locate a hospital or school to serve a community defined by population centers. The problem was originally considered by Fermat in the 17th century, and there are now many variations. One variation is the location of an *obnoxious facility*, such as a garbage dump. In that case the objective is to maximize total (weighted) distance from the given points (there are constraints, such as cost, that require the location to be within some distance of the points).
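For the classic single-facility version with Euclidean distances, a standard solution method is the Weiszfeld iteration, a repeatedly re-weighted average of the given points. The sketch below assumes 2-d points and makes the usual caveat that no iterate lands exactly on a data point; all names are illustrative:

```python
import math

def weiszfeld(points, weights, iterations=200):
    """Minimize sum_i w_i * ||x - p_i|| over locations x by the
    Weiszfeld iteration; points are (x, y) pairs."""
    total_w = sum(weights)
    # start at the weighted centroid
    x = sum(w * px for (px, _), w in zip(points, weights)) / total_w
    y = sum(w * py for (_, py), w in zip(points, weights)) / total_w
    for _ in range(iterations):
        num_x = num_y = den = 0.0
        for (px, py), w in zip(points, weights):
            d = math.hypot(x - px, y - py)
            if d == 0.0:
                return (px, py)          # landed exactly on a data point
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        x, y = num_x / den, num_y / den
    return (x, y)
```

For four equally weighted corners of a square, for instance, the minimizer is the center of the square.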

### Factorable function

This has a recursive definition, relative to a collection of *elementary operations* (e.g., sum, product) on other factorable functions, starting with some set of *primitive functions*. For example, starting with the power functions, $x^p$ for $p = 0, 1, 2, \dots$, and allowing sums and products with constants, we can obtain all polynomials. A polynomial with three terms thus *factors* into 3 parts.

### Factorable program

All functions (objective and constraints) are factorable. An example is the separable program, provided that each of the univariate functions is factorable. Another is the geometric program, where each posynomial is factorable.

### Factored form of basis

Originally for linear programming, this pertains to writing a basis in the form, $B = B_1 B_2 \cdots B_k$, where the $B_i$ are *factors*. Two forms of interest are: *elementary factors*, where each $B_i$ is an elementary matrix, and LU decomposition: $B = LU$, where $L$ is lower triangular and $U$ is upper triangular. ($L$ and $U$ might be factored, notably into elementary matrices, which are lower and upper triangular, resp.) These have inexpensive updates when $B$ changes to an adjacent basis.
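A minimal sketch of the LU form is the Doolittle elimination scheme below, without pivoting (a production LP code would pivot for sparsity and numerical stability; names are illustrative):

```python
def lu_factor(B):
    """Factor B = L U with L unit lower triangular and U upper
    triangular (Doolittle scheme, no pivoting)."""
    n = len(B)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):            # row i of U
            U[i][j] = B[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for r in range(i + 1, n):        # column i of L
            L[r][i] = (B[r][i] - sum(L[r][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```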

### Fail first principle

The *fail first principle* is an idea of what a variable ordering heuristic should attempt to do in tree search. It suggests that the variable to be assigned next should be the one most likely to lead to a dead-end. The heuristic aims at proving as soon as possible that the search is in a sub-tree with no feasible solutions.

One way of operationalizing this principle is to choose as the next variable to be assigned the one that is most constrained. The extent to which a variable is constrained can be measured in different ways. One simple measure is the current size of the domain: under this measure, the variable with the smallest domain should be assigned next. Another common measure is to select the variable that appears in the largest number of constraints. More sophisticated estimates exist.
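The smallest-domain measure, with constraint degree as a tie-breaker, can be sketched as follows (all names here are illustrative, not a standard API):

```python
def select_variable(domains, constraint_scopes):
    """Fail-first choice: among variables whose domain still has more
    than one value, pick the one with the smallest domain, breaking
    ties by the number of constraints the variable appears in."""
    unassigned = [v for v, dom in domains.items() if len(dom) > 1]
    def degree(v):
        return sum(1 for scope in constraint_scopes if v in scope)
    return min(unassigned, key=lambda v: (len(domains[v]), -degree(v)))
```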

### Faithfully convex

A convex function is faithfully convex if it is constant along some segment only if it is constant along the whole line containing this segment.

### Fallacy of averages

Imagine standing with one foot on a keg of ice and the other in a fire. Your average body temperature may be fine, but the extremes will kill you! The *fallacy of averages* is the fallacious results you may get when replacing data with their expected values. Formally, the fallacy is stated as $E[XY] \ne E[X]E[Y]$ – viz., the covariance is not zero. Another form of this fallacy is that $E[f(X)] \ne f(E[X])$ (unless $f$ is linear). In particular, suppose we have a mathematical program, $\max\{f(x; p) : g(x; p) \le 0\}$, where $p$ is a vector of parameters with some uncertainty. The fallacy of averages suggests that it is a mistake to replace $p$ with its expected value for at least 2 reasons: (1) members of $p$ may be correlated, and (2) the average values of $f$ and $g$ need not equal the functional values at the mean.

### Farkas lemma

This result gives an alternative system, of which there are many. Farkas' Lemma states that exactly one of the following is consistent:

(I) $Ax = b,\ x \ge 0$; (**exclusive or**) (II) $yA \ge 0,\ yb < 0$.

An often used variant is:

(I) $Ax \le 0,\ cx > 0$; (**exclusive or**) (II) $yA = c,\ y \ge 0$.

### Fathom

Arises in directed tree search. A node is
*fathomed* if it is determined that no
completion from this point could produce a better solution than one
already obtained. This could happen by bounding the objective, or
it could happen by determining there is no feasible solution with the
partial specifications corresponding to the node.

### Feasibility map

The point-to-set map that maps a right-hand side to a feasible region: $b \mapsto \{x : g(x) \le b\}$.

### Feasible

A point is feasible if it satisfies all
constraints. The *feasible region* (or *feasibility
region*) is the set of all feasible points. A mathematical
program is feasible if its feasible region is not empty.

The term feasible doesn't imbue other properties such as convexity or connectedness. For example, a feasible region of a nonlinear program could be $\{x \in \mathbb{R} : x^2 \ge 1\}$, which is the disjoint union $(-\infty, -1] \cup [1, \infty)$.

### Feasible direction

Given a feasible point $x$, a direction vector, say $d$, is feasible if there exists $s > 0$ such that $x + sd$ is feasible. The *method of feasible directions* is designed to choose a feasible direction at each iteration, along which (as $s$ becomes positive) the objective value improves. Such directions exist for continuous mathematical programs (where the line segment $[x, x + sd]$ is feasible for some $s > 0$) unless $x$ is a local optimum. (Note: with nonlinear constraints, there is typically no feasible direction according to this (classical) definition. See the generalized reduced gradient method.)

### Fermat-Weber problem

See facility location problem.

### Fibonacci search

This finds the maximum of a unimodal function on an interval, $[a, b]$, by evaluating points placed according to a fibonacci sequence, $\{F_n\}$. If there are $F_N$ points in the interval, only $N$ evaluations are needed. In the continuous case, we begin with some *interval of uncertainty*, $[a, b]$, and we reduce its length to $(b - a)/F_N$. The ratio, $F_{N-1}/F_N$, is the key to the placements.

Here is the method for the continuous case:

*Initialization*. Let $x_1 = a + (F_{N-2}/F_N)(b - a)$ and $x_2 = a + (F_{N-1}/F_N)(b - a)$. Evaluate $f(x_1)$ and $f(x_2)$ and set $k = N$.

*Iteration*. If $f(x_1) < f(x_2)$, reduce the interval to $[x_1, b]$ (i.e., set $a = x_1$), decrement $k$ to $k - 1$, and set $x_1 = x_2$ and $x_2 = a + (F_{k-1}/F_k)(b - a)$. If $f(x_1) \ge f(x_2)$, reduce the interval to $[a, x_2]$ (i.e., set $b = x_2$), decrement $k$ to $k - 1$, and set $x_2 = x_1$ and $x_1 = a + (F_{k-2}/F_k)(b - a)$.

The fibonacci search method minimizes the maximum number of evaluations needed to reduce the interval of uncertainty to within the prescribed length. For example, it reduces the length of a unit interval to 1/10946 = 0.00009136 with only 20 evaluations. In the case of a finite set, fibonacci search finds the maximum value of a unimodal function on 10,946 points with only 20 evaluations, but this can be improved – see lattice search.

For very large $N$, the placement ratio approaches the golden mean, and the method approaches the golden section search. Here is a comparison of interval reduction lengths for fibonacci, golden section and dichotomous search methods. In each case $N$ is the number of evaluations used, and the entry is the resulting length of the interval of uncertainty relative to the original. For example, with 20 evaluations dichotomous search reduces the interval of uncertainty to $1/2^{10} \approx 0.000976$ of its original length (with separation value near zero). The most reduction comes from fibonacci search, which is more than an order of magnitude better, at $1/F_{20} \approx 0.0000914$. Golden section is close (and gets closer as $N$ gets larger).

| Evaluations $N$ | Fibonacci $1/F_N$ | Dichotomous $1/2^{N/2}$ | Golden section $1/(1.618)^{N-1}$ |
|---|---|---|---|
| 5 | 0.125 | 0.25 | 0.146 |
| 10 | 0.0112 | 0.0312 | 0.0131 |
| 15 | 0.00101 | 0.00781 | 0.00118 |
| 20 | 0.0000914 | 0.000976 | 0.000107 |
| 25 | 0.00000824 | 0.0002441 | 0.0000096 |
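A sketch of the continuous method for maximization follows (function names are assumptions; note that the final pair of trial points coincide, which implementations usually separate by a small epsilon):

```python
def fib(n):
    """Fibonacci numbers with F_0 = F_1 = 1, as in this glossary."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fibonacci_search(f, a, b, n):
    """Maximize a unimodal f on [a, b] with n evaluations; returns a
    final interval of uncertainty containing the maximizer."""
    x1 = a + fib(n - 2) / fib(n) * (b - a)
    x2 = a + fib(n - 1) / fib(n) * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(n, 2, -1):
        if f1 < f2:                       # maximum cannot lie in [a, x1]
            a, x1, f1 = x1, x2, f2
            x2 = a + fib(k - 2) / fib(k - 1) * (b - a)
            f2 = f(x2)
        else:                             # maximum cannot lie in [x2, b]
            b, x2, f2 = x2, x1, f1
            x1 = a + fib(k - 3) / fib(k - 1) * (b - a)
            f1 = f(x1)
    return a, b
```

Each iteration makes one new evaluation, and the surviving trial point is always in the correct fibonacci position for the reduced interval.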

### Fibonacci sequence

The numbers satisfying $F_n = F_{n-1} + F_{n-2}$, with initial conditions $F_0 = F_1 = 1$. As shown in the following table, the fibonacci sequence grows rapidly after $N = 10$, and becomes astronomical for $N \ge 50$,

| N | F_N |
|---|---|
| 0 | 1 |
| 1 | 1 |
| 2 | 2 |
| 3 | 3 |
| 4 | 5 |
| 5 | 8 |
| 6 | 13 |
| 7 | 21 |
| 8 | 34 |
| 9 | 55 |
| 10 | 89 |
| 20 | 10946 |
| 50 | 2.0E10 |
| 100 | 5.7E20 |

Named after the 13th century mathematician who discovered it, this sequence has many interesting properties, notably for an optimal univariate optimization strategy, called fibonacci search.

### Fill-in

Arises in the context of sparse matrices subjected to certain operations. In particular, a basis may be factored into a product of elementary matrices to represent Gaussian elimination. The nonzeroes that appear in positions that were not in the original matrix are called *fill-in* coefficients. An example of a matrix that has no fill-in for this factorization is one that is lower triangular; in that case the elementary factors are themselves lower triangular, with the same nonzero positions as the original matrix.

On the other hand, a lower triangular matrix could cause fill-in for some other factorization, such as the Cholesky factorization. A completely dense matrix also has no fill-in, since there were no zeroes to begin with. Fill-in arises when elimination, taking the original order of rows and columns in the product form, creates a nonzero coefficient in a position that was zero in the original matrix.
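An illustrative way to see fill-in (the matrix and function below are assumptions chosen for the demonstration): run Gaussian elimination in the original row/column order and record each position that becomes nonzero where the original matrix had a zero. An "arrow" matrix with a dense first row and column fills in its entire trailing block:

```python
def fillin_positions(A):
    """Gaussian elimination in the given order; returns the set of
    positions that become nonzero where the original had a zero."""
    n = len(A)
    M = [row[:] for row in A]
    fill = set()
    for p in range(n):
        for i in range(p + 1, n):
            if M[i][p] == 0:
                continue
            m = M[i][p] / M[p][p]
            for j in range(p, n):
                new = M[i][j] - m * M[p][j]
                if A[i][j] == 0 and new != 0:
                    fill.add((i, j))     # a fill-in coefficient
                M[i][j] = new
    return fill

# Dense first row and column, otherwise diagonal: eliminating the
# first column makes every off-diagonal trailing position nonzero.
arrow = [[4, 1, 1, 1],
         [1, 1, 0, 0],
         [1, 0, 1, 0],
         [1, 0, 0, 1]]
```

Reordering this matrix so the dense row and column come last would avoid all of that fill-in, which is why elimination orderings matter for sparse matrices.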

### First-order conditions

This descends from classical optimization, using Taylor's Theorem. For unconstrained optimization, this is simply that $\nabla f(x) = 0$. (In variational calculus, it is the Euler-Lagrange equation.) For constrained optimization, the first-order conditions of a regular mathematical program are given by the Lagrange Multiplier Rule.

### Fixed charge

A cost that is some value, say $c$, regardless of the level, as long as the level is positive; otherwise the fixed charge is zero. This is represented by $cu$, where $u$ is a binary variable. If $u = 0$, the fixed charge is 0; if $u = 1$, the fixed charge is $c$. An example is whether to open a plant ($u = 1$) or not ($u = 0$). To apply this fixed charge to the non-negative variable $x$, the constraint $x \le Mu$ is added to the mathematical program, where $M$ is a very large value, known to exceed any feasible value of $x$. Then, if $u = 0$ (e.g., not opening the plant that is needed for $x > 0$), $x = 0$ is forced by the upper bound constraint. If $u = 1$ (e.g., plant is open), $x \le M$ is a redundant upper bound. *Fixed charge problems* are mathematical programs with fixed charges. (See Myths and Counterexamples in Mathematical Programming to avoid a misconception.)

### Fixed point

A fixed point of a function, $f$, is an element $x$ of its domain such that $f(x) = x$. For a point-to-set map, $F$, we instead have that $x \in F(x)$. The study of fixed points has been at the foundation of algorithms.

### Fixed variable

A variable whose value is fixed, either explicitly or implicitly. Explicit fixing arises for convenience of scenario design, particularly in linear programming, and during an algorithm's progression, particularly in integer programming. Implicit fixing occurs from a forcing substructure.

### Fleet mix problem

To determine how many of each type of aircraft should be in a fleet to meet demands and minimize total cost. An integer linear program model minimizes total maintenance cost subject to mission requirements, where $a_i$ is the number of aircraft of type $i$ available; $K_{ij}$ is the capacity of aircraft type $i$ for mission $j$; $L_j$ is the least number of missions of type $j$ that must be flown; and $U_j$ is the greatest number of missions of type $j$ that must be flown. The variables are $x_i$, the number of aircraft of type $i$ in the fleet, and $c_i$ is its maintenance cost. If the aircraft must be purchased, binary variables are introduced, say $u_i$, with a fixed charge, $f_i$, in the objective. There could be additional constraints, such as a budget on total purchases or on total maintenance.

### Floor

This is the integer round-down of a value, $v$:

$$\lfloor v \rfloor = \max\{n \in \mathbb{Z} : n \le v\}.$$

Examples: $\lfloor 1.5 \rfloor = 1$, $\lfloor -1.5 \rfloor = -2$, $\lfloor 2 \rfloor = 2$. Also, see ceiling.

### Flow augmenting path

This arises in the Ford-Fulkerson labeling algorithm to find a maximum flow in a network with arc flow bounds, $L \le x \le U$. Given a flow, $x$, from a source to a sink, a flow augmenting path, relative to $x$, is a path from that source to the sink such that

$x_{ij} < U_{ij}$ along all forward arcs, and

$x_{ij} > L_{ij}$ along all backward arcs.

The flow, $x$, is a maximum flow from the source to the sink if and only if there is no flow augmenting path relative to $x$. If there is a flow augmenting path, the flow can be changed along the path such that total flow increases (by at least 1 if all data are integer).
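A sketch with zero lower bounds follows; the breadth-first search for an augmenting path is the Edmonds-Karp refinement of the labeling algorithm, and the adjacency-matrix representation and names are assumptions:

```python
from collections import deque

def max_flow(cap, s, t):
    """Repeatedly find an augmenting path (forward arcs below capacity;
    backward arcs carrying positive flow, handled here via residual
    capacities) and push the bottleneck amount along it."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    def augmenting_path():
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if v not in parent and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    queue.append(v)
        return None                       # no augmenting path: flow is maximum
    total = 0
    while (parent := augmenting_path()) is not None:
        bottleneck, v = float('inf'), t
        while parent[v] is not None:      # find the bottleneck on the path
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while parent[v] is not None:      # augment along the path
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck      # makes the backward arc usable later
            v = u
        total += bottleneck
    return total
```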

### Forced equality

This is an inequality constraint that must hold with equality in every feasible solution.

### Forcing substructure

A subsystem of the constraints and the variables in them that forces some value. For example, the system $x_1 + x_2 + x_3 \le 0$, $x \ge 0$, is a forcing substructure because each variable, which is required to be non-negative, is forced to be zero.

### Forward checking

Forward checking is a propagation procedure that guarantees that, at each step of the search, all the constraints between already assigned variables and not yet assigned variables are arc consistent.

Formally, let $P = (X, D, C)$ be a binary constraint network and $Y \subseteq X$ such that $|D(x)| = 1$ for all $x \in Y$. $P$ is forward checking consistent according to the instantiation $I$ on $Y$ iff $I$ is locally consistent and, for all $x \in Y$, for all $y \in X \setminus Y$, for all $c \in C$ with scope $\{x, y\}$, $c$ is arc consistent.

### Forward substitution

The recursion to solve a nonsingular lower triangular system, $Lx = b$. It starts with $x_1 = b_1 / L_{11}$, then

$$x_i = \Big(b_i - \sum_{j < i} L_{ij} x_j\Big) / L_{ii}$$

for $i = 2, \dots, n$.
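The recursion in code (a sketch; the list-of-lists representation is an assumption):

```python
def forward_substitution(L, b):
    """Solve L x = b for a nonsingular lower triangular L:
    x_i = (b_i - sum_{j<i} L_ij x_j) / L_ii, in increasing i."""
    x = []
    for i in range(len(b)):
        partial = sum(L[i][j] * x[j] for j in range(i))
        x.append((b[i] - partial) / L[i][i])
    return x
```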

### Forward transformation

See FTRAN.

### Forward triangularization

An algorithm to rearrange a matrix by recursively assigning a singleton row to the front (with its only column, as its pivot). The recursion applies to the submatrix defined by omitting the assigned row and column (and reducing other row counts accordingly). This results in a lower triangular rearrangement if and only if such a rearrangement exists.
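A sketch of the recursion on a sparsity pattern (the set-of-positions representation and names are assumptions):

```python
def forward_triangularize(n, nonzeros):
    """Repeatedly move a singleton row to the front with its only
    column as the pivot. Returns (row_order, col_order) giving a lower
    triangular rearrangement, or None if none exists."""
    active = set(nonzeros)                # remaining (row, col) positions
    remaining = set(range(n))
    row_order, col_order = [], []
    while remaining:
        counts = {r: 0 for r in remaining}
        for r, c in active:               # row counts in the submatrix
            counts[r] += 1
        singleton = next((r for r in sorted(remaining) if counts[r] == 1), None)
        if singleton is None:
            return None                   # no lower triangular rearrangement
        pivot_col = next(c for r, c in active if r == singleton)
        row_order.append(singleton)
        col_order.append(pivot_col)
        remaining.remove(singleton)
        active = {(r, c) for r, c in active
                  if r != singleton and c != pivot_col}
    return row_order, col_order
```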

### Fourier-Motzkin elimination

A procedure by which a variable is eliminated from a system of linear inequalities. The remaining inequalities, which include some new ones, have a solution if and only if the original system has a solution. Eventually, the system reduces to one variable, or an inconsistency is determined during the elimination. The one variable can be assigned any value in its range; then a backtracking procedure can be executed to obtain values of the other variables, which were eliminated. Originally presented by Fourier in 1826, the method was analyzed by Motzkin in 1933 in light of new developments in the theory of linear inequalities to solve linear programs. Its computational complexity is exponential, and it gave way to the simplex method.
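One elimination step can be sketched as follows on a system of inequalities $a \cdot x \le b$ (the pair representation is an assumption): combine each inequality with a positive coefficient on the chosen variable with each inequality having a negative one, so the variable cancels:

```python
def fm_eliminate(system, k):
    """Eliminate variable k from a list of (coeffs, rhs) pairs, each
    meaning coeffs . x <= rhs. Returns the reduced system."""
    zero, pos, neg = [], [], []
    for a, b in system:
        (pos if a[k] > 0 else neg if a[k] < 0 else zero).append((a, b))
    reduced = list(zero)
    for ap, bp in pos:
        for an, bn in neg:
            # scale so the coefficient of x_k is +1 and -1, then add
            coeffs = [p / ap[k] + q / (-an[k]) for p, q in zip(ap, an)]
            reduced.append((coeffs, bp / ap[k] + bn / (-an[k])))
    return reduced
```

After all variables are eliminated, only all-zero coefficient rows remain; the original system is consistent iff every such right-hand side is non-negative.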

### Fractional program

Objective and/or constraint functions are sums of ratios of the form $f_i(x)/g_i(x)$, where typically $g_i(x) > 0$ over some domain $X$. The case of one term is special, and the *linear fractional program* has affine numerator and denominator. The linear 1-term case, $\min\{(cx + c_0)/(dx + d_0) : Ax \le b\}$ (where $dx + d_0 > 0$ over the feasible region), is equivalent to solving a parametric linear program: $\min\{cx + c_0 - t(dx + d_0) : Ax \le b\}$, seeking the value of the parameter $t$ at which the optimal objective value is zero.

### Frank-Wolfe Theorem

If a quadratic function is bounded from below on a nonempty polyhedron, it attains a minimum there. In mathematical notation, we are given a quadratic function $f$ and a nonempty polyhedron $P$. Then, if there exists $L$ such that $f(x) \ge L$ for all $x \in P$, there exists $x^* \in P$ such that $f(x^*) \le f(x)$ for all $x \in P$.

### Free variable

A variable with no (explicit) sign restriction. (The term is generally used in linear programming because the standard form has $x \ge 0$.)

### Fritz John conditions

These extend Carathéodory's conditions to deal with inequality constraints:

- Suppose $x^*$ is in $\operatorname{argmax}\{f(x) : g(x) \le 0\}$, where $f$ and $g$ are smooth. Then, there exist multipliers $(y_0, y) \ge 0$, not all zero, such that

$$y_0 \nabla f(x^*) = \sum_i y_i \nabla g_i(x^*), \qquad y_i g_i(x^*) = 0 \ \text{ for all } i.$$

### Fuzzy CSP

In a fuzzy constraint satisfaction problem, each constraint $c$ and instantiation $I$ is assigned a *degree of satisfaction* $\mu_c(I)$, which is a value in the $[0,1]$ real interval indicating the *degree* to which $c$ is satisfied by $I$. If this value is 1, $c$ is satisfied, and if this value is 0, $c$ is violated. In the most common interpretation of fuzzy CSPs, the task is to find an instantiation $I$ for which the minimum of $\mu_c(I)$, with $c$ ranging over all the constraints (i.e. the smallest degree of satisfaction for the instantiation $I$), is maximal.
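A brute-force sketch of this max-min task (all names are illustrative): enumerate instantiations and keep the one whose smallest degree of satisfaction is largest:

```python
from itertools import product

def solve_fuzzy_csp(domains, constraints):
    """domains: variable -> list of values; constraints: tuple of
    variables -> function returning a degree in [0, 1]. Returns the
    best (min over constraints) degree and an instantiation achieving it."""
    variables = list(domains)
    best_degree, best_assignment = -1.0, None
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        degree = min(c(*(assignment[v] for v in scope))
                     for scope, c in constraints.items())
        if degree > best_degree:
            best_degree, best_assignment = degree, assignment
    return best_degree, best_assignment
```

Real fuzzy CSP solvers prune with branch-and-bound rather than enumerating, but the objective is the same max-min criterion.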

### Fuzzy math program

Some (or all) constraints are *fuzzified* by replacing the level set, $\{x : g(x) \le b\}$, with the requirement that $\mu(x) \ge \alpha$, where $\mu$ is a membership function that is given (or derived from primitive membership functions on the level sets of each $g_i$), and $\alpha$ is a parameter in $[0, 1]$ to be set for each instance of the model. Typically (but not necessarily), $\alpha = 1$ means that the constraint must hold, and the lower the value of $\alpha$, the more $x$'s are admitted. (This appears similar to the chance-constraint model of stochastic programming, but it is more closely related to goal programming.) While fuzzy sets are used to represent uncertainty, they can also be used to represent preferences, e.g. a requirement, $\mu_S(x) \ge \mu_S(y)$, could mean that it is preferred that $x$ be in $S$ at least as much as it is preferred that $y$ be in $S$.

### Fuzzy set

Given a universe set, $X$, and a *membership function*, $\mu : X \to [0, 1]$, a fuzzy set is a collection of pairs: $\{(x, \mu(x)) : x \in X\}$. Often, the membership function is subscripted by the set name, say $\mu_A$. Generally, for all $x$, $0 \le \mu(x) \le 1$, with $\mu(x) = 1$ meaning $x$ is certainly in the set and $\mu(x) = 0$ meaning it certainly is not. In the context of uncertainty, the value $\mu(x)$ is used to model the statement of how *possible* it is for $x$ to be in $A$. For this reason, $\mu$ is sometimes called the *possibility function* of $A$. What we consider the usual set (without a membership function) is called a *crisp set* in fuzzy mathematics.

Fuzzy set operations, say on fuzzy sets $A$ and $B$, with membership functions $\mu_A$ and $\mu_B$, resp., are defined by the following:

*Union*: $\mu_{A \cup B}(x) = \max\{\mu_A(x), \mu_B(x)\}$. *Intersection*: $\mu_{A \cap B}(x) = \min\{\mu_A(x), \mu_B(x)\}$. *Complement*: $\mu_{A^c}(x) = 1 - \mu_A(x)$.

One must be careful when using fuzzy sets to represent uncertainty (which is not the only type of interpretation – see fuzzy mathematical program). In particular, if $\mu_A(x) = 1/2$, its complement also has $\mu_{A^c}(x) = 1/2$. Thus, $\mu_{A \cap A^c}(x) = 1/2 \ne 0$, despite the fact that $A \cap A^c = \emptyset$ (in ordinary set theory). Similarly, $\mu_{A \cup A^c}(x) = 1/2 \ne 1$, despite the fact that $A \cup A^c = X$. This illustrates the fact that two fuzzy sets need not be equal even if they are equal as crisp sets.
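The max/min/complement operations and this complement behavior can be demonstrated in a small sketch (membership functions as plain Python functions; all names assumed):

```python
def f_union(mu_a, mu_b):
    return lambda x: max(mu_a(x), mu_b(x))

def f_intersection(mu_a, mu_b):
    return lambda x: min(mu_a(x), mu_b(x))

def f_complement(mu_a):
    return lambda x: 1 - mu_a(x)

half = lambda x: 0.5   # membership exactly 1/2 everywhere

# A "intersected" with its complement is not empty, and the
# union is not the whole universe:
assert f_intersection(half, f_complement(half))(0) == 0.5
assert f_union(half, f_complement(half))(0) == 0.5
```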

While the fuzzy set is fundamental for fuzzy mathematical programming, other concepts in fuzzy mathematics also apply, such as fuzzy arithmetic, fuzzy graphs and fuzzy logic.
