
From Glossary


TNP rule

See degeneracy graph.

Table constraint

See extensional constraint.


Tableau

(pl. tableaux). A detached coefficient form of a system of equations, which can change from LaTeX: \textstyle x+Ay=b to LaTeX: x'+A'y'=b'. The primes denote changes caused by multiplying the first equation system by the basis inverse (a sequence of pivots in the simplex method). Although the exact form varies (e.g., where to put LaTeX: b), the following is fairly standard:

                                            Nonbasic Variables
  Basic Variables | Basic Level | at lower bound | at upper bound | superbasic
  ----------------+-------------+----------------+----------------+---------------------
  name (or index) | LaTeX: x_B  | coefficients are negatives of the rates of substitution
  obj             | LaTeX: -z   |          reduced costs          | Lagrange multiplier

(Other information could be appended, such as the original bound values.)

Tabu search

This is a metaheuristic to solve global optimization problems, notably combinatorial programs, based on multi-level memory management and responsive exploration. It requires the concept of a neighborhood for a trial solution (perhaps partial). In its simplest form, a tabu search proceeds as follows:

  • Initialize. Select LaTeX: x and set Tabu List LaTeX: T = null. If LaTeX: x is feasible, set LaTeX: x^* = x and LaTeX: f^* = f(x^*); otherwise, set LaTeX: f^* = -\infty (for minimization set LaTeX: f^* = \infty).
  • Select move. Let LaTeX: S(x) = set of neighbors of LaTeX: x. If LaTeX: S(x)\setminus T is empty, go to update. Otherwise, select LaTeX: \textstyle y \in \argmax \left \{E(v): v \in S(x)\setminus T \right \}, where E is an evaluator function that measures the merit of a point (need not be the original objective function, LaTeX: f). If y is feasible and LaTeX: f(y) > f^*, set LaTeX: x^*=y and LaTeX: f^*=f(x^*). Set LaTeX: x=y (i.e., move to the new point).
  • Update. If some stopping rule holds, stop. Otherwise, update LaTeX: T (by some tabu update rule) and return to select move.

There are many variations, such as aspiration levels, that can be included in more complex specifications.
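The steps above can be sketched in Python; the objective, neighborhood, and tabu-list size below are all illustrative choices, not part of the definition:

```python
# Minimal tabu search sketch: minimize f(x) = (x - 7)^2 over integers 0..20,
# with neighbors x-1 and x+1, and E = -f as the evaluator (argmax E = argmin f).
from collections import deque

def f(x):
    return (x - 7) ** 2

def neighbors(x):
    return [v for v in (x - 1, x + 1) if 0 <= v <= 20]

def tabu_search(x, max_iter=50, tabu_size=3):
    tabu = deque(maxlen=tabu_size)          # Tabu list T (fixed-length memory)
    best_x, best_f = x, f(x)                # incumbent x*, f* (minimization)
    for _ in range(max_iter):
        candidates = [v for v in neighbors(x) if v not in tabu]
        if not candidates:                  # S(x) \ T empty: stop searching
            break
        y = min(candidates, key=f)          # select move maximizing E = -f
        if f(y) < best_f:                   # record improvement
            best_x, best_f = y, f(y)
        tabu.append(x)                      # simple tabu update rule
        x = y                               # move to the new point
    return best_x, best_f

print(tabu_search(0))   # (7, 0): the tabu list keeps the walk from backtracking
```

The tabu list here is pure recency memory; real implementations add frequency memory and aspiration levels.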

Tangent cone

Let LaTeX: \textstyle S \subset R^n and let LaTeX: \textstyle x^* \in S. The tangent cone, LaTeX: \textstyle T(S, x^*), is the set of points with the following limit property: LaTeX: \textstyle y is in LaTeX: \textstyle T(S, x^*) if there exist sequences LaTeX: \textstyle a_n \ge 0 \mbox{ and } x^n \in S such that LaTeX: \textstyle \{x^n\} \to x^* \mbox{ and } \{\|a_n (x^n - x^*) - y\|\} \to 0. This arises in connection with the Lagrange Multiplier Rule much like the tangent plane, though it allows for more general constraints, e.g., set constraints. In particular, when there are only equality constraints, LaTeX: \textstyle h(x) = 0, LaTeX: \textstyle T(S, x^*) is the null space of LaTeX: \textstyle \nabla h(x^*) if LaTeX: \textstyle \nabla h(x^*) has full row rank. (There are some subtleties that render the tangent cone more general, in some sense, than the tangent plane or null space. It is used in establishing a necessary constraint qualification.)

Tangent plane

Consider the surface, LaTeX: \textstyle S = \{x \mbox{ in } R^n : h(x)=0\}, where LaTeX: \textstyle h \in C^1. A differentiable curve passing through LaTeX: \textstyle x^* \in S is LaTeX: \textstyle \{x(t): x(0)=x^* \mbox{ and } h(x(t))=0 \mbox{ for all } t \in (-e,e)\}, for which the derivative, LaTeX: x'(t), exists, where LaTeX: e > 0. The tangent plane at LaTeX: x^* is the set of all initial derivatives: LaTeX: \{x'(0)\}. (This is a misnomer, except in the special case of one function and two variables at a non-stationary point.) An important fact underlies the classical Lagrange multiplier theorem when LaTeX: \textstyle \nabla h(x^*) has full row rank (LaTeX: x^* is then called a regular point): the tangent plane is LaTeX: \textstyle \{d: \nabla h(x^*) d = 0\}.
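As a worked illustration (not from the original entry), take the unit circle in LaTeX: R^2:

```latex
% h(x) = x_1^2 + x_2^2 - 1, so S is the unit circle in R^2.  At x^* = (1, 0):
\nabla h(x) = (2x_1,\; 2x_2), \qquad \nabla h(x^*) = (2,\; 0),
% which has full row rank, so x^* is a regular point and the tangent plane is
\{d : \nabla h(x^*)\, d = 0\} = \{d : 2 d_1 = 0\} = \{(0, d_2) : d_2 \in R\},
% the vertical line through x^*, matching the geometric tangent to the circle.
```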

Extending this to allow inequalities, the equivalent of the tangent plane for a regular point LaTeX: x^* is the set of directions that satisfy first-order conditions to be feasible:

LaTeX: \{d: \nabla h(x^*) d = 0 \mbox{ and } \nabla g_i(x^*) d \le 0 \mbox{ for all } i: g_i(x^*) = 0 \}.

Target analysis

This is a metaheuristic to solve global optimization problems, notably combinatorial programs, using a learning mechanism. In particular, consider a branch and bound strategy with multiple criteria for branch selection. After solving training problems, hindsight is used to eliminate dead paths in the search tree by changing the weights on the criteria: set LaTeX: w > 0 such that LaTeX: \textstyle w V_i \le 0 at each node LaTeX: i, with criteria values LaTeX: \textstyle V_i, that begins a dead path, and LaTeX: \textstyle w V_i > 0 at each node LaTeX: i on the path to the solution. If such weights exist, they define a separating hyperplane for the test problems. If such weights do not exist, problems are partitioned into classes, using a form of feature analysis, such that each class has such weights for those test problems in the class. After training is complete, when a new problem arrives, it is first classified, and then those weights are used in the branch selection.

Taylor expansion

For LaTeX: \textstyle f \in C^n, Taylor's Theorem is used by dropping the remainder term. The first-order expansion about LaTeX: y is LaTeX: \textstyle f(x) \approx f(y) + \nabla f(y) (x-y), and the second-order expansion is

LaTeX: \textstyle f(x) \approx f(y) + \nabla f(y) (x-y) + \frac{1}{2}\, (x-y)' H_f(y) (x-y).

(See Myths and Counter Examples in Mathematical Programming to avoid misconception.)
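As a quick numeric check of the second-order expansion (a sketch; the function LaTeX: f(x) = e^x and the evaluation points are illustrative):

```python
# Second-order Taylor expansion of f(x) = exp(x) about y: since every
# derivative of exp at y equals exp(y), the expansion is
#   f(y) + f'(y)(x - y) + (1/2) f''(y)(x - y)^2 = exp(y)(1 + (x-y) + (x-y)^2/2).
import math

def second_order(y, x):
    return math.exp(y) * (1 + (x - y) + 0.5 * (x - y) ** 2)

y, x = 0.0, 0.1
approx = second_order(y, x)
exact = math.exp(x)
# The dropped remainder is O((x - y)^3), so the error should be tiny here.
print(abs(exact - approx) < 1e-3)   # True
```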

Taylor series

For a function, LaTeX: f, having all derivatives, the series expansion of LaTeX: f(x+h) about LaTeX: x is:

\sum_{k=0}^{\infty} \frac{f^{(k)} (x)}{k!}\, h^k,

where LaTeX: \textstyle f^{(k)} is the k-th derivative of LaTeX: f. Truncating the series at the n-th term, the error is given by:

\varepsilon_n (h) = \left | f(x+h) - \sum_{k=0}^{n} \frac{f^{(k)} (x)}{k!}\, h^k \right |.

This is a Taylor expansion, and for the Taylor series to equal the functional value, it is necessary that the error term approach zero as more terms are taken:

\lim_{n \to \infty} \varepsilon_n (h) = 0.

If LaTeX: \textstyle f \in C^{n+1}, there must exist LaTeX: y in the line segment LaTeX: \textstyle [x, x+h] such that

\varepsilon_n (h) = \left | \frac{f^{(n+1)} (y)}{(n + 1)!}\, h^{n+1} \right |.

Taylor theorem

LaTeX: \textstyle \mbox{Let } f:(a-h, a+h) \to R \mbox{ be in } C^{n+1}. Then, for LaTeX: \textstyle x \in (a-h, a+h),

f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \dots + \frac{f^{(n)}(a)}{n!}(x-a)^n + R_n(x, a),

where LaTeX: \textstyle R_n(x, a), called the remainder, is given by the integral:

\int_{a}^{x} \frac{(x - t)^n}{n!} f^{(n+1)} (t)\, dt.

This extends to multivariate functions and is a cornerstone theorem in nonlinear programming. Unfortunately, it is often misapplied as an approximation by dropping the remainder, assuming that it goes to zero as LaTeX: \textstyle x \to a. (See Myths and Counter Examples in Mathematical Programming to avoid misconception.)
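The integral remainder can be checked numerically; the sketch below uses LaTeX: f = \exp (so every derivative is again LaTeX: \exp) and a simple trapezoid rule, all illustrative choices:

```python
# Check that R_n(x, a) = integral_a^x (x - t)^n / n! * f^(n+1)(t) dt
# equals f(x) minus the degree-n Taylor polynomial, for f = exp.
import math

def taylor_partial(a, x, n):
    # sum_{k=0}^{n} f^(k)(a) (x - a)^k / k!, with f = exp so f^(k)(a) = exp(a)
    return sum(math.exp(a) * (x - a) ** k / math.factorial(k) for k in range(n + 1))

def remainder_integral(a, x, n, steps=10000):
    # composite trapezoid rule for the integral remainder
    h = (x - a) / steps
    def g(t):
        return (x - t) ** n / math.factorial(n) * math.exp(t)
    total = 0.5 * (g(a) + g(x))
    for i in range(1, steps):
        total += g(a + i * h)
    return total * h

a, x, n = 0.0, 0.5, 2
direct = math.exp(x) - taylor_partial(a, x, n)   # what the remainder must equal
print(abs(direct - remainder_integral(a, x, n)) < 1e-8)   # True
```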

Temporal CSP

A temporal constraint satisfaction problem (TCSP) is a constraint satisfaction problem that involves a set of variables LaTeX: \{x_1,\dots , x_n\} having continuous or discrete domains; each variable represents a time point. Each constraint is represented by a set of intervals LaTeX: \{I_1, \dots , I_k\}=\{[a_1,b_1], \dots , [a_k,b_k]\}.

A unary constraint LaTeX: T_i restricts the domain of variable LaTeX: x_i to the given set of intervals; that is, it represents the disjunction LaTeX: (a_1\leq x_i\leq b_1) \vee \dots \vee (a_k\leq x_i\leq b_k).

A binary constraint LaTeX: T_{ij} restricts the permissible values for the distance LaTeX: x_j-x_i; it represents the disjunction LaTeX: (a_1\leq x_j-x_i\leq b_1) \vee \dots \vee (a_k\leq x_j-x_i\leq b_k).

In canonical form, all intervals of a constraint are pair-wise disjoint.
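A minimal sketch of these constraint checks in Python (the interval data is illustrative):

```python
# A TCSP constraint is a list of pairwise-disjoint intervals [(a1, b1), ..., (ak, bk)].

def satisfies_unary(x_i, intervals):
    # T_i: (a1 <= x_i <= b1) v ... v (ak <= x_i <= bk)
    return any(a <= x_i <= b for a, b in intervals)

def satisfies_binary(x_i, x_j, intervals):
    # T_ij restricts the distance x_j - x_i to the given intervals
    return any(a <= x_j - x_i <= b for a, b in intervals)

# Example: event x_j must occur 1-2 or 5-6 time units after event x_i.
T_ij = [(1, 2), (5, 6)]
print(satisfies_binary(0, 1.5, T_ij))   # True:  distance 1.5 lies in [1, 2]
print(satisfies_binary(0, 3.0, T_ij))   # False: distance 3 falls in a gap
```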

Theorem of alternative

One that establishes that two systems are alternatives, i.e., exactly one of the two systems has a solution.

Tight constraint

Same as active constraint, but some authors exclude the redundant case, in which an inequality constraint happens to hold with equality but is not binding.

Time staged

A model with a discrete time parameter, LaTeX: t=1,\dots,T, as in dynamic programming, but the solution technique need not use the DP recursion. The number of time periods LaTeX: (T) is called the planning horizon.

Tolerance approach

This is an approach to sensitivity analysis in linear programming that expresses the common range that parameters can change while preserving the character of the solution. In particular, suppose LaTeX: B is an optimal basis and rim data changes by LaTeX: (Db, Dc). The tolerance for this is the maximum value of LaTeX: t for which LaTeX: B remains optimal as long as LaTeX: \textstyle |Db_i| \le t for all LaTeX: i and LaTeX: \textstyle |Dc_j| \le t for all LaTeX: j. The tolerance for the basis, LaTeX: B, can be computed by simple linear algebra, using tableau information.


Tolerances

Small positive values that control elements of a computer implementation of an algorithm. When determining whether a value, LaTeX: v, is non-negative, the actual test is LaTeX: v > -t, where LaTeX: t is an absolute tolerance. When comparing two values to determine whether LaTeX: u \le v, the actual test is

u - v \le t_a + t_r \left |v \right |,

where LaTeX: t_a is the absolute tolerance (as above), and LaTeX: t_r is the relative tolerance (some make the relative deviation depend on LaTeX: u as well as on LaTeX: v, such as the sum of magnitudes, LaTeX: \textstyle \left |u \right | + \left |v \right |). Most MPSs have a tolerance for every action they take during their progression; in particular, one zero tolerance is not enough: the tolerance used to test feasibility is usually smaller than the one used to determine an acceptable pivot element. In fact, the use of tolerances is a crucial part of an MPS, including any presolve that would fix a variable when its upper and lower bounds are "sufficiently close" (i.e., within some tolerance). A tolerance is dynamic if it can change during the algorithm. For example, a loose tolerance might be used for the line search early in an algorithm, then tightened as the sequence gets close to an optimal solution. The Nelder-Mead simplex method illustrates how tolerances might change up and down during the algorithm.

Another typical tolerance test applies to residuals to determine whether LaTeX: x is a solution to LaTeX: Ax=b. In this case, the residual is LaTeX: r=Ax-b, and the test has the form:

\| r \| \le t_a + t_r \| b \|,

where LaTeX: \| \cdot \| is some norm. Further details are in the note on Tolerances.
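The three tests described in this entry can be sketched as follows (the tolerance values are illustrative, not recommendations):

```python
# Tolerance tests for non-negativity, comparison, and residuals.
import math

T_ABS = 1e-8   # absolute tolerance t_a (illustrative value)
T_REL = 1e-6   # relative tolerance t_r (illustrative value)

def nonnegative(v, t=T_ABS):
    # "v >= 0" is actually tested as v > -t
    return v > -t

def leq(u, v, t_a=T_ABS, t_r=T_REL):
    # "u <= v" is actually tested as u - v <= t_a + t_r |v|
    return u - v <= t_a + t_r * abs(v)

def solves(A, x, b, t_a=T_ABS, t_r=T_REL):
    # residual test: ||Ax - b|| <= t_a + t_r ||b||, Euclidean norm
    r = [sum(A[i][j] * x[j] for j in range(len(x))) - b[i] for i in range(len(b))]
    norm = lambda v: math.sqrt(sum(e * e for e in v))
    return norm(r) <= t_a + t_r * norm(b)

print(nonnegative(-1e-12))                                 # True: within tolerance
print(leq(1.0000001, 1.0))                                 # True: relative slack
print(solves([[2, 0], [0, 4]], [1.0, 0.5], [2.0, 2.0]))    # True: exact solution
```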

Topological sort

This sorts the nodes in a network such that each arc, say k-th, has LaTeX: \textstyle \mbox{Tail}(k) < \mbox{Head}(k) in the renumbered node indexes. This arises in a variety of combinatorial programs, such as those with precedence constraints. If the nodes cannot be topologically sorted, the network does not represent a partially ordered set. This means, for example, there is an inconsistency in the constraints, such as jobs that cannot be sequenced to satisfy the asserted precedence relations.
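The renumbering can be computed by Kahn's algorithm, sketched below (the node and arc data are illustrative):

```python
# Kahn's algorithm: repeatedly output a node with no remaining predecessors.
from collections import deque

def topological_sort(nodes, arcs):
    indegree = {v: 0 for v in nodes}
    succ = {v: [] for v in nodes}
    for tail, head in arcs:
        succ[tail].append(head)
        indegree[head] += 1
    queue = deque(v for v in nodes if indegree[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in succ[v]:
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    if len(order) < len(nodes):
        return None   # a cycle exists: the precedence constraints are inconsistent
    return order

# Renumbering each node by its position in `order` gives Tail(k) < Head(k) for every arc.
print(topological_sort(['a', 'b', 'c'], [('a', 'b'), ('b', 'c')]))   # ['a', 'b', 'c']
print(topological_sort(['a', 'b'], [('a', 'b'), ('b', 'a')]))        # None (cycle)
```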

Totally unimodular matrix

See unimodular matrix.

Transportation problem

Find a flow of least cost that ships from supply sources to consumer destinations. This is a bipartite network, LaTeX: N = [S \cup T, A], where LaTeX: S is the set of sources, LaTeX: T is the set of destinations, and LaTeX: A is the set of arcs. In the standard form, LaTeX: N is bi-complete (LaTeX: A contains all arcs from LaTeX: S to LaTeX: T), but in practice networks tend to be sparsely linked. Let LaTeX: c(i, j) be the unit cost of flow from LaTeX: i \in S to LaTeX: j \in T, LaTeX: s(i) the supply at the i-th source, and LaTeX: d(j) the demand at the j-th destination. Then, the problem is the linear program:

\begin{array}{rlll}
\min & \multicolumn{3}{l}{\sum\limits_{(i,j) \in S \times T} c_{i,j}\, x_{i,j}} \\
\mbox{subject to:} & \sum\limits_{j \in T} x_{i,j} & \le & s_i \mbox{ for all } i \in S, \\
 & \sum\limits_{i \in S} x_{i,j} & \ge & d_j \mbox{ for all } j \in T, \\
 & x & \ge & 0.
\end{array}

The decision variables LaTeX: (x) are called flows, and the two classes of constraints are called supply limits and demand requirements, respectively. (Some authors use equality constraints, rather than the inequalities shown.) An extension is the capacitated transportation problem, where the flows have bounds: LaTeX: \textstyle x \le U.
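For a tiny instance, the LP can be solved by brute force over integer flows; the data below are illustrative, and a real solver would use the simplex method or a network algorithm:

```python
# Brute-force search over integer flows for a 2-source, 2-destination instance.
import itertools

S, T = [0, 1], [0, 1]    # sources and destinations
s = [3, 2]               # supplies s(i)
d = [2, 3]               # demands d(j)
c = [[4, 6],
     [5, 3]]             # c[i][j]: unit cost from source i to destination j

best_cost, best_flow = None, None
for x00, x01, x10, x11 in itertools.product(range(4), range(4), range(3), range(3)):
    x = [[x00, x01], [x10, x11]]
    supply_ok = all(sum(x[i]) <= s[i] for i in S)          # supply limits
    demand_ok = all(sum(x[i][j] for i in S) >= d[j] for j in T)  # demand requirements
    if supply_ok and demand_ok:
        cost = sum(c[i][j] * x[i][j] for i in S for j in T)
        if best_cost is None or cost < best_cost:
            best_cost, best_flow = cost, x

print(best_cost, best_flow)   # 20 [[2, 1], [0, 2]]
```

Since total supply equals total demand here, every feasible flow ships exactly five units; the search finds the cheapest such flow.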

Transposition theorem

Same as a theorem of the alternative.

Transshipment problem

This is an extension of the transportation problem whereby the network is not bipartite. Additional nodes serve as transshipment points, rather than provide supply or final consumption. The network is N = [V, A], where V is an arbitrary set of nodes, except that it contains a non-empty subset of supply nodes (where there is external supply) and a nonempty subset of demand nodes (where there is external demand). A is an arbitrary set of arcs, and there could also be capacity constraints.

Traveling salesman problem

Given LaTeX: n points and a cost matrix, LaTeX: [c_{ij}], a tour is a permutation of the LaTeX: n points. The points can be cities, and the permutation represents visiting each city exactly once, then returning to the first city (called home). The cost of a tour, LaTeX: <i_1, i_2, \dots, i_{n-1}, i_n, i_1>, is the sum of the costs of its arcs:

c_{i_1 i_2} + c_{i_2 i_3} + \dots + c_{i_{n-1} i_n} + c_{i_n i_1},

where LaTeX: (i_1, i_2, \dots, i_n) is a permutation of LaTeX: \{1,\dots,n\}. The TSP is to find a tour of minimum total cost. The two common integer programming formulations are:

ILP: LaTeX: \min \left\{ \sum_{ij} c_{ij} x_{ij} : x \in P, \, x_{ij} \in \{0, 1\} \right\}

Subtour elimination constraints: LaTeX: \sum_{i,j \in V} x_{ij} \le |V| - 1 \mbox{ for } \emptyset \ne V \subset \{1, \dots , n \} \; (V \ne \{ 1, \dots, n \} ),

where LaTeX: 
x_{ij} = \begin{cases}
1 & \mbox{ if city } j \mbox{ follows city } i \mbox{ in the tour} \\
0 & \mbox{ otherwise}
\end{cases}

QAP: LaTeX: 
\min \left\{ \sum_{ij} c_{ij} \left( \sum_{k=1}^{n-1} x_{ik} x_{j \, k+1} + x_{i n} x_{j 1} \right) : x \in P, \, x_{i j} \in \{0 , 1\}
\right\} ,

where LaTeX: 
x_{ij} = \begin{cases}
1 & \mbox{ if city } j \mbox{ follows city } i \mbox{ in the tour} \\
0 & \mbox{ otherwise}
\end{cases}

In each formulation, LaTeX: P is the assignment polytope. The subtour elimination constraints in ILP eliminate assignments that create cycles, such as the two subtours LaTeX: 1 \to 2 \to 3 \to 1 and LaTeX: 4 \to 5 \to 4 shown in the original figure (omitted here).

The first subtour is eliminated by LaTeX: V = \{1, 2, 3\}, which requires LaTeX: x_{12} + x_{23} + x_{31} \le 2. The second subtour is eliminated by LaTeX: V = \{4, 5\}, which requires LaTeX: x_{45} + x_{54} \le 1.

In ILP, it is easy to incorporate missing arcs: replace the subscripts LaTeX: (ij) with an arc index, LaTeX: k. Then, the subtour elimination constraints have sums over arcs whose head is in LaTeX: V. The quadratic assignment problem formulation (QAP) is well suited to certain solution methods, such as neural networks.

When the cost matrix is symmetric, it is called the symmetric TSP. When the points are in the plane, and the cost matrix is the Euclidean distance matrix, it is called the Euclidean TSP. See the n-Opt heuristic.
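For small LaTeX: n, a minimum-cost tour can be found by enumerating permutations with home fixed; the 4-city cost matrix below is illustrative:

```python
# Brute-force TSP: enumerate tours starting at city 0 and keep the cheapest.
import itertools

c = [[0, 1, 4, 3],
     [1, 0, 2, 5],
     [4, 2, 0, 1],
     [3, 5, 1, 0]]   # symmetric cost matrix (so this is a symmetric TSP)
n = len(c)

def tour_cost(perm):
    # cost of the tour <perm[0], ..., perm[n-1], perm[0]>
    return sum(c[perm[k]][perm[(k + 1) % n]] for k in range(n))

# Fix city 0 as home to avoid counting rotations of the same tour.
best = min(itertools.permutations(range(1, n)), key=lambda p: tour_cost((0,) + p))
print((0,) + best, tour_cost((0,) + best))   # (0, 1, 2, 3) 7
```

This checks LaTeX: (n-1)! tours, which is why branch and bound, cutting planes, or heuristics are used for realistic sizes.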

Triangle inequality

A property of a distance function: LaTeX: f(x, y) \le f(x, z) + f(z, y) \mbox{ for all } x, y, z.

Triangular matrix

A square matrix, LaTeX: A, is called upper triangular if all elements below the main diagonal are zero, i.e., LaTeX: \textstyle A_{ij} = 0 \mbox{ for } i > j. It is called lower triangular if its transpose is upper triangular. We sometimes call a matrix triangular if it is either lower or upper triangular.
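One reason triangular matrices matter: LaTeX: Ux = b can be solved by back substitution in LaTeX: O(n^2) time, as in this sketch (the matrix and right-hand side are illustrative, and LaTeX: U must have nonzero diagonal):

```python
# Back substitution for an upper triangular system U x = b.

def back_substitute(U, b):
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # row i involves only x[i], ..., x[n-1], so solve bottom-up
        x[i] = (b[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

U = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 2.0],
     [0.0, 0.0, 4.0]]
b = [9.0, 13.0, 8.0]
print(back_substitute(U, b))   # [2.0, 3.0, 2.0]
```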

Trim problem

See cutting stock problem.

Truncated gradient

Projection of the gradient of a function, LaTeX: f \in C^1, on a box, LaTeX: [a, b], which puts zeroes in coordinates where the partial derivative is negative at the lower bound or positive at the upper bound. This yields a feasible direction, but methods based on it need not converge, due to the zig-zag phenomenon.

Trust region method

The iteration is defined as LaTeX: x' = x + p, where LaTeX: p is the (complete) change determined by a subproblem of the form:

\max \{ F(p) : \| p \| \le D \},

where F depends on the iterate and is an approximation of the change in objective function value. The particular norm and the magnitude of D determine the set of admissible change values (p), and this is called the trust region. A common choice of F is the quadratic form using the Taylor expansion about the current iterate, x, as:

F(p) = p^T \nabla f(x) + \frac{1}{2} p^T H_f(x) p.

Using the Euclidean norm and applying the Lagrange Multiplier Rule to the subproblem yields LaTeX: p from the equation:

(H_f(x) - u I)\, p = - \nabla f(x) \mbox{ for some } u \ge 0.

Note that for u=0, the iteration is Newton's method, and for very large u, the iteration is nearly Cauchy's steepest ascent.
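A sketch of the subproblem solve for a small maximization instance (the gradient, Hessian, and radius are illustrative): with the Euclidean norm, LaTeX: u is increased from zero until LaTeX: \|p\| \le D.

```python
# Solve (H - uI) p = -g with u >= 0 chosen so the step fits the trust region.
import math

g = [1.0, 2.0]                      # gradient of f at the current iterate x
H = [[-2.0, 0.0], [0.0, -1.0]]      # Hessian (negative definite: concave model)
D = 0.5                             # trust-region radius

def step(u):
    # solve the 2x2 system (H - uI) p = -g by Cramer's rule
    a, b = H[0][0] - u, H[0][1]
    c, d = H[1][0], H[1][1] - u
    det = a * d - b * c
    return [(-d * g[0] + b * g[1]) / det, (c * g[0] - a * g[1]) / det]

def norm(p):
    return math.sqrt(p[0] ** 2 + p[1] ** 2)

p = step(0.0)                       # u = 0 gives the pure Newton step
if norm(p) > D:                     # Newton step too long: push u up
    lo, hi = 0.0, 1e6               # ||p(u)|| decreases as u grows; bisect on u
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm(step(mid)) > D else (lo, mid)
    p = step(lo)

print(norm(p) <= D + 1e-9)          # True: the step respects the trust region
```

For large LaTeX: u the computed step is nearly a small multiple of the gradient, matching the steepest-ascent limit noted above.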
