Economic order quantity

Abbreviated EOQ. This is the level of inventory to order that minimizes the sum of holding and ordering costs. The inventory function, LaTeX: I(t), is the following periodic sawtooth function, where LaTeX: T is the time between orders, and LaTeX: Q is the ordering quantity:

LaTeX: I(t) = Q - dt \; \mbox{ for } \; 0 \le t \le T,

where LaTeX: d is the rate of demand (inventory units per time units), and LaTeX: I(t) = I(t-T) for LaTeX: t > T. The inventory becomes zero at LaTeX: T = Q/d, which requires a new order of LaTeX: Q units. The model is thus:

LaTeX: \min \; (1/2) hdT + K/T,

where LaTeX: h is the holding cost (currency per time units), so LaTeX: (1/2) hdT is the average holding cost, and LaTeX: K is the fixed cost of ordering, so LaTeX: K/T is the average ordering cost. The solution is LaTeX: T^* = (2K/hd)^{1/2}, which yields the Economic Order Quantity (EOQ): LaTeX: Q^* = (2Kd/h)^{1/2}. See the more general production scheduling problem.
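The closed-form solution can be checked numerically; a minimal sketch (function names and parameter values here are illustrative):

```python
import math

def eoq(K, h, d):
    """Optimal order cycle T* and order quantity Q* for the EOQ model.

    K: fixed ordering cost, h: holding cost (currency per unit per time),
    d: demand rate (units per time).
    """
    T_star = math.sqrt(2.0 * K / (h * d))  # T* = (2K/hd)^(1/2)
    Q_star = d * T_star                    # Q* = d T* = (2Kd/h)^(1/2)
    return T_star, Q_star

def avg_cost(T, K, h, d):
    """Average cost per unit time: holding (1/2)hdT plus ordering K/T."""
    return 0.5 * h * d * T + K / T

T, Q = eoq(K=100.0, h=2.0, d=50.0)
```

Perturbing LaTeX: T^* in either direction increases the average cost, confirming it is the minimizer.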


Edge finding

The edge-finding rule is a constraint propagation technique that consists of deducing whether an activity LaTeX: a_i can, cannot, or must be executed before or after a set of activities LaTeX: \Omega, a_i \notin \Omega that all (including LaTeX: a_i) require the same resource. Two types of conclusions can then be drawn: new ordering relations (“edges” in the graph representing the possible orderings of activities) and new time-bounds (i.e., tightening of the earliest and latest start and end times of activities).

An edge-finding algorithm is a procedure that performs all such deductions.

Effective domain

The set of points at which a function's value is finite.

Efficient frontier

The collection of efficient points; see multiple objectives.


Eigenvalue

If a (scalar) value, LaTeX: \lambda, satisfies LaTeX: Ax = \lambda x for some vector LaTeX: x \neq 0, it is an eigenvalue of the matrix LaTeX: A, and LaTeX: x is an eigenvector. In mathematical programming this arises in the context of convergence analysis, where LaTeX: A is the Hessian of some merit function, such as the objective or Lagrangian.
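As a small numerical illustration (the matrix is illustrative; NumPy's `eigh` handles the symmetric case, as with a Hessian):

```python
import numpy as np

# A symmetric matrix, e.g. a Hessian; eigenpairs satisfy A x = lambda x.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, V = np.linalg.eigh(A)  # eigenvalues ascending; columns of V are eigenvectors
# All eigenvalues positive means A is positive definite, so a merit
# function with this Hessian is locally strictly convex.
```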

Elastic program

Same as Phase I; the artificial variables are called elastic, with the idea that the constraints can "stretch".

Element constraint

A global constraint that is true iff a variable LaTeX: z is the LaTeX: y-th element of an array LaTeX: X. More specifically, it involves a 1-dimensional variable array LaTeX: X = \langle x_1, \dots, x_n \rangle, an integer variable LaTeX: y, and a variable LaTeX: z, where LaTeX: X[y] = z; it is typically written as "element(X,y,z)".
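The constraint's truth test can be sketched directly (this checks a full assignment; it is not a propagator, and the function name is illustrative):

```python
def element(X, y, z):
    """element(X, y, z): holds iff the y-th entry of X equals z,
    with 1-based indexing for y as in the definition above."""
    return 1 <= y <= len(X) and X[y - 1] == z
```

For LaTeX: X = \langle 4, 7, 9 \rangle, the constraint holds with LaTeX: y=2, z=7 but not with LaTeX: y=2, z=9.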

Elementary matrix

A matrix formed by performing a single row operation on the identity matrix. These arise in linear programming, particularly for the product form of the basis: LaTeX: B = E_1 E_2 ... E_m, where each LaTeX: E_i is an elementary matrix.
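A quick sketch of the row-operation view (the specific operation chosen here is illustrative):

```python
import numpy as np

# An elementary matrix: the identity with a single row operation applied,
# here "add 2 times row 0 to row 2" on the 3x3 identity.
E = np.eye(3)
E[2, 0] = 2.0

# Left-multiplying by E performs the same row operation on any matrix.
A = np.arange(9.0).reshape(3, 3)
M = E @ A  # row 2 of M equals row 2 of A plus 2 * row 0 of A
```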

Elementary simplex method

See simplex method.

Elementary vector

Let LaTeX: V be a subspace of LaTeX: \mathbb{R}^n. For LaTeX: v \in V, let LaTeX: S(v) denote its support set: LaTeX: \{j: v_j \neq 0\}. Then, LaTeX: v \neq 0 is an elementary vector of LaTeX: V if there does not exist LaTeX: v' \in V such that LaTeX: v' \neq 0 and LaTeX: S(v') is a proper subset of LaTeX: S(v). This extends to where LaTeX: V is not a subspace, such as the collection of non-negative vectors in LaTeX: \mathbb{R}^n, in which case LaTeX: S(v) is the set of coordinates with positive value. This is of particular importance in defining the optimal partition of an LP solution.


Ellipsoid

An ellipsoid centered at LaTeX: c is the set

LaTeX: E(c,Q) = \{x: (x-c)^T Q (x-c) \le 1\},

where LaTeX: Q is a symmetric, positive definite matrix.

Ellipsoid method

This algorithm seeks to solve LaTeX: Ax \le b by iterations that begin with LaTeX: x=0 and LaTeX: Q=\mbox{diag}(2^{2L}), where LaTeX: L is the bit length of the problem data -- i.e., LaTeX: x is a solution if, and only if, it satisfies LaTeX: Ax \le b + 2^{-L}, and the initial ball is large enough to contain a solution if one exists. If the current iterate, LaTeX: x, does not satisfy this, a new pair LaTeX: (x', Q') is defined by choosing one of the violated inequalities, say LaTeX: A_{i,*}x > b_i. Then, let LaTeX: v = A_{i,*}^T, and apply the following rules:

  1. LaTeX:  x' = x - \frac{1}{n+1} \, \frac{Qv}{\sqrt{v^T Q v}}
  2. LaTeX:  Q' = \frac{n^2}{n^2 - 1} \left( Q - \frac{2}{n+1} \, \frac{Q v v^T Q}{v^T Q v} \right)
Now LaTeX: E(x',Q') defines an ellipsoid centered at LaTeX: x', and it contains all the feasible points in LaTeX: E(x,Q). After at most LaTeX: O(n^2 L) iterations, the algorithm terminates with a solution, or with the fact that no solution exists.

By treating the objective as a constraint, say LaTeX: c^T x \le c^Tx^* - e and/or LaTeX: c^T x \ge c^Tx^* + e, the ellipsoid method can be used to solve a linear program. Its significance is that it was the first algorithm for linear programming shown to have polynomial time complexity.
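The feasibility iteration can be sketched as follows; this is a minimal central-cut version, assuming LaTeX: n \ge 2 and a starting radius large enough to contain a feasible point (the function name and defaults are illustrative):

```python
import numpy as np

def ellipsoid_feasibility(A, b, R=10.0, max_iter=1000, tol=1e-9):
    """Seek x with A x <= b, starting from the ball E(0, R^2 I).
    Assumes n >= 2 and that the ball contains a feasible point."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    n = A.shape[1]
    x, Q = np.zeros(n), (R ** 2) * np.eye(n)
    for _ in range(max_iter):
        viol = A @ x - b
        i = int(np.argmax(viol))
        if viol[i] <= tol:
            return x                     # all inequalities hold
        v = A[i]                         # normal of a violated inequality
        Qv = Q @ v
        x = x - Qv / ((n + 1) * np.sqrt(v @ Qv))  # shift center off the cut
        Q = (n * n / (n * n - 1.0)) * (
            Q - (2.0 / (n + 1)) * np.outer(Qv, Qv) / (v @ Qv))
    return None                          # no feasible point found

# A 2-D box: 0.2 <= x_1 <= 0.8, 0.2 <= x_2 <= 0.8
x = ellipsoid_feasibility([[1, 0], [-1, 0], [0, 1], [0, -1]],
                          [0.8, -0.2, 0.8, -0.2])
```

Each update shrinks the ellipsoid's volume by a fixed factor while keeping all feasible points of the previous ellipsoid, which is what yields the polynomial iteration bound.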


Elliptope

This arises in semi-definite programming. It is the set of all real, symmetric LaTeX: n \times n matrices whose diagonal entries are all equal to one and whose eigenvalues are all non-negative. The dimension of the set is LaTeX: n(n-1)/2. For example, consider LaTeX: n=2:

LaTeX: M = \left[ \begin{array}{cc} 1 & a \\ a & 1 \end{array} \right].

Then, LaTeX: M is in the elliptope for LaTeX: n=2 if, and only if, LaTeX: -1 \le a \le 1, i.e. it is diagonally dominant (note the dimension is 1). An elliptope is also called a set of correlation matrices.


Epigraph

This is the region on or above the graph of a function LaTeX: f:

LaTeX: \mbox{epi}(f,X) = \{(x, z): x \in X, \; z \ge f(x)\}.

Equilibrium basis

In linear programming this is a basis that is optimal in both the primal and dual. It arises in compatibility theory and degeneracy graphs.

Equivalent CSPs

Consider two constraint satisfaction problems LaTeX: P_1 and LaTeX: P_2 and a sequence LaTeX: X of their common variables. We say that LaTeX: P_1 and LaTeX: P_2 are equivalent w.r.t LaTeX: X if

  • for every solution LaTeX: d to LaTeX: P_1 a solution to LaTeX: P_2 exists that coincides with LaTeX: d on the variables in X, and
  • for every solution LaTeX: e to LaTeX: P_2 a solution to LaTeX: P_1 exists that coincides with LaTeX: e on the variables in X.

Euclidean norm

See norm.

Euler-Lagrange equation

This is a necessary condition for LaTeX: y to solve the classical problem in variational calculus:

LaTeX:  \mbox{div}(F_{\zeta}(x, y(x), \nabla y(x))) = F_{\lambda}(x, y(x), \nabla y(x)),

where LaTeX: F(x, \lambda, \zeta): \Omega \times \mathbb{R} \times \mathbb{R}^n \rightarrow \mathbb{R}.

Evolutionary algorithm

This is a term that applies to any metaheuristic based on some biological metaphor that is a dynamical system whose evolution rules have a stochastic component. In particular, it includes genetic algorithms, scatter search, Particle Swarm Optimization, and some neural networks. It emerged as a unification of related computational methods, including cellular automata, which was conceived by Ulam and von Neumann in the 1940s. This unification is not universal. Some researchers cling to the GA metaphor but say that evolutionary algorithms have more operators; some say that the GA population is composed of bit strings, whereas evolutionary algorithms can have a broader range of population structures. It is the concept of evolution dynamics, which is population-based and stochastic, that distinguishes this class of algorithms from other methods of global optimization. Evolutionary computation uses the evolutionary algorithm as its foundation; typically, it is massively parallel.
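The common skeleton of this class can be sketched as a tiny genetic algorithm over bit strings; all names and parameter values here are illustrative, not a standard API:

```python
import random

def evolve(fitness, n_bits=16, pop_size=30, generations=60,
           p_mut=0.05, seed=0):
    """Minimal evolutionary-algorithm sketch: bit-string population,
    tournament selection, one-point crossover, bit-flip mutation.
    The stochastic operators are the 'evolution rules' of the text."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: fitness is the number of ones; the optimum is the all-ones string.
best = evolve(lambda bits: sum(bits))
```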

Exact penalty function

A penalty function is exact if there is a finite value of the penalty parameter for which a maximum of the penalized problem solves the original mathematical program.

Existence of solution

This pertains to whether a mathematical program has a solution. One source of non-existence is that the feasible region is empty. Another is that it is unbounded, and it is possible to have a sequence of feasible points whose objective value diverges. In linear programming, these are the only sources of non-existence. In nonlinear programming, however, we can have asymptotes such that the objective is bounded but cannot achieve its infimum or supremum. See the Bolzano-Weierstrass theorem, which is fundamental. For linear programming, see theorems of the alternative.

Expanding subspace theorem

Let LaTeX: S_k be the subspace spanned by the Q-conjugate vectors LaTeX: d^0, \dots, d^{k-1}. Let LaTeX: x^0 be any initial point in LaTeX: \mathbb{R}^n, and let LaTeX: x^k be generated by these conjugate directions with optimal line search:

LaTeX: \displaystyle x^{k+1} = x^k + s_k d^k \;\; \mbox{ and } \;\; s_k = \frac{-(Qx^k + c)^T d^k}{(d^k)^T Q d^k}.

Then, LaTeX: x^k minimizes LaTeX: \frac{1}{2} x^T Q x + c^T x on the affine set LaTeX: x^0 + S_k.
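The theorem can be verified numerically: after stepping along a full set of Q-conjugate directions with exact line search, the iterate is the global minimizer. A sketch (the matrix, directions, and names are illustrative):

```python
import numpy as np

def conjugate_direction_minimize(Q, c, x0, dirs):
    """Minimize f(x) = (1/2) x^T Q x + c^T x by exact line search
    along successive Q-conjugate directions; gradient is Qx + c."""
    x = np.asarray(x0, float)
    for d in dirs:
        g = Q @ x + c                  # gradient at the current iterate
        s = -(g @ d) / (d @ Q @ d)     # the optimal step s_k above
        x = x + s * d
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
d0 = np.array([1.0, 0.0])
d1 = np.array([-0.25, 1.0])            # d0^T Q d1 = 0: Q-conjugate pair
x = conjugate_direction_minimize(Q, c, [0.0, 0.0], [d0, d1])
```

After both steps, `x` solves LaTeX: Qx = -c, the first-order condition for the quadratic.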

Explicitly quasiconcave function

A function whose negative is explicitly quasiconvex.

Explicitly quasiconvex function

A quasiconvex function that is also strictly quasiconvex. Its main property is that it rules out flat portions that cause the closure condition to fail. (Every convex function is explicitly quasiconvex.)

Exponent matrix

The matrix of the powers of the variables appearing in the posynomials of a geometric program.

Extended reals

Typically denoted by LaTeX: \mathbb{R}^*, this is the set of real numbers augmented with positive and negative infinity, i.e. LaTeX: \mathbb{R}^{*} = \mathbb{R} \cup \{\infty, -\infty\}.

Extensional constraint

Constraints on a set of variables LaTeX: X =\{x_1, \dots, x_n\} can be stated in an extensional fashion by specifying a table that summarizes all allowed (or in an alternative formulation, all disallowed) combinations of assignments to the variables in LaTeX: X.

For instance, the extensional constraint LaTeX: table(\{x,y\},\{[1,2],[1,3],[2,3]\}) expresses the intensional constraint LaTeX: x < y where LaTeX:  x and LaTeX: y can take values LaTeX: \{1,2,3\}.

Also called a table constraint.
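The truth test for the example above can be sketched directly (the function name is illustrative; this checks a full assignment rather than propagating):

```python
def table_constraint(assignment, allowed):
    """Extensional (table) constraint: the tuple of values assigned to
    the variables must appear among the allowed tuples."""
    return tuple(assignment) in {tuple(row) for row in allowed}

# The table from the example, encoding x < y over {1, 2, 3}:
allowed = [[1, 2], [1, 3], [2, 3]]
```

Checking every pair in LaTeX: \{1,2,3\}^2 confirms the table agrees with the intensional constraint LaTeX: x < y.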

Exterior penalty function

A penalty function in which the associated algorithm generates infeasible points, approaching feasibility in the limit towards a solution. An example is to maximize LaTeX: f(x) - u h(x)^2 over LaTeX: x \in X and let LaTeX: u \rightarrow \infty in order to approach the solution to

LaTeX: \max \{f(x): x \in X, \; h(x)=0\}.

If, during the penalty iterations, LaTeX: h(x^*)=0 for some finite LaTeX: u, then LaTeX: x^* solves the original mathematical program. Otherwise, the idea is that LaTeX: x^*(u) approaches the feasibility condition, LaTeX: h(x^*)=0, as LaTeX: u gets large (though this need not happen without assumptions on LaTeX: f and LaTeX: h).
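A hypothetical numerical sketch of these iterations, using a crude grid search as the inner maximizer (the function, the constraint, and the grid are all illustrative):

```python
def exterior_penalty_max(f, h, x_grid, us):
    """For each penalty parameter u, maximize f(x) - u * h(x)^2 over the
    grid; the maximizers approach the constrained solution as u grows."""
    return [max(x_grid, key=lambda x, u=u: f(x) - u * h(x) ** 2)
            for u in us]

# Example: maximize f(x) = x subject to h(x) = x - 1 = 0 on X = [-2, 2].
# The penalized maximizer is 1 + 1/(2u), infeasible for every finite u
# but tending to the constrained solution x = 1 as u grows.
grid = [i / 1000.0 for i in range(-2000, 2001)]
xs = exterior_penalty_max(lambda x: x, lambda x: x - 1.0,
                          grid, [1.0, 10.0, 100.0])
```

Note the iterates approach feasibility from outside, which is what makes the penalty "exterior".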


Extrapolation

The idea of estimating a value by extending information at hand outside its immediate range. In linear programming, an extrapolation estimate of the optimal objective value uses the dual price LaTeX: y as a slope: LaTeX: z(b + h) = z(b) + yh. For a sequence LaTeX: x^k, an extrapolation is an estimate of a limit point.

Extreme point

A point in the closure of a set, say LaTeX: S, that is not the midpoint of any open line segment with end points in the closure of LaTeX: S. Equivalently, LaTeX: x is an extreme point of a closed set, LaTeX: S, if there do not exist LaTeX: y, z \in S \backslash \{x\} for which LaTeX: \textstyle x = (y + z)/2. When LaTeX: S is a polyhedron of the standard form, LaTeX: S=\{x: Ax=b, \; x \ge 0\}, with LaTeX: A of full row rank, we have one of the fundamental theorems of linear programming that underlies the simplex method:

LaTeX: x is an extreme point of the feasible region if, and only if, LaTeX: x is a basic feasible solution.

Extreme ray

An extreme ray of a closed set LaTeX: S is a ray in LaTeX: S that cannot be expressed as a (simple) sum of other rays in LaTeX: S. For example, the axes, LaTeX: \{t e_j: t \ge 0\}, are extreme rays of the non-negative vectors in LaTeX: \mathbb{R}^n. The ray LaTeX: \{t e: t \ge 0\}, however, is the sum of the axes since LaTeX: (t,...,t) = t e_1 + \ldots + t e_n. The recession direction of an extreme ray is sometimes called an extreme direction.

Extreme value

An extremum (pl. extrema). A minimum or maximum.