CPLEX 
A collection of mathematical programming software solvers.
CSP 
See constraint satisfaction problem.
Calculus of variations 
See variational calculus.
Capacity expansion 
This is an augmentation of any mathematical program that has constraints representing a capacity limit. Suppose g(x) <= c represents the limitation of using more than c units of capacity, where g(x) is the actual amount used for policy x. Then, this becomes a capacity expansion model by replacing the constraint with g(x) <= c + v, where v >= 0 is a new decision variable representing added capacity, whose cost is added to the objective.
Capital budgeting problem 
In its elementary form there is a fixed amount of capital, say C, that can be allocated to any of n investments. Each investment has a minimum level, say L_j, and a maximum level, say U_j. The expected return on investment is a function, R(x), where x_j is the level of the j-th investment opportunity (L_j <= x_j <= U_j). Risk is measured by a standard deviation from the expected return, say s(x). The problem is to maximize total expected return, subject to a budget constraint: sum_j x_j <= C, and a risk constraint: s(x) <= a, where C and a are parameters. The returns on the investments could be correlated. Then, if Q is the variance-covariance matrix, the risk constraint is quadratic: x'Qx <= a. (Also see the portfolio selection problem.)
Caratheodory conditions 
For the classical Lagrange form, min{f(x) : h(x) = 0}, where f and h are smooth, the following conditions are necessary for a feasible x to be optimal: there exists (y_0, y) != 0, called multipliers, such that
y_0 grad f(x) - y' grad h(x) = 0.
This reduces to the Lagrange Multiplier Rule when y_0 is not zero (divide by y_0), which must be the case if grad h(x) has full row rank.
Caterer problem 
A caterer has booked his services for the next n days. He requires d_t fresh napkins on the t-th day, t = 1, ..., n. He sends his soiled napkins to the laundry, which has 3 speeds of service, each taking a different number of days. The faster the service, the higher the cost of laundering a napkin. He can also purchase new napkins at a cost. With an initial stock of napkins, the caterer wishes to minimize his total cost. (This can be formulated as a transportation problem.)
Cauchy-Schwarz inequality 
This is the inequality
|x'y| <= ||x|| ||y||,
where ||.|| is the Euclidean norm. (Note: for nonzero x and y, x'y/(||x|| ||y||) is the cosine of the angle between vectors x and y.)
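As a quick numerical check (an illustrative sketch, not part of the original glossary), the inequality and the cosine interpretation can be verified directly:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Euclidean norm."""
    return math.sqrt(dot(x, x))

x, y = [1.0, 2.0, 3.0], [-4.0, 0.5, 2.0]
lhs = abs(dot(x, y))
rhs = norm(x) * norm(y)
assert lhs <= rhs + 1e-12  # Cauchy-Schwarz holds

# Cosine of the angle between x and y:
cos_theta = dot(x, y) / (norm(x) * norm(y))
print(lhs, rhs, cos_theta)
```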
Ceiling 
The integer round-up of a value, v: ceiling(v) = min{n integer : n >= v}.
Examples: ceiling(2.1) = 3, ceiling(-2.1) = -2, ceiling(5) = 5. Also, see floor.
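In Python, the ceiling and its counterpart, the floor, are available in the standard library (a minimal illustration):

```python
import math

# Ceiling: smallest integer not less than v; floor: largest integer not greater.
values = [2.1, -2.1, 5.0]
print([math.ceil(v) for v in values])   # [3, -2, 5]
print([math.floor(v) for v in values])  # [2, -3, 5]
```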
Central path 
A differentiable curve where each point is the analytic center of a polyhedron associated with a linear program. Specifically, suppose we have the primal-dual pair:
min{c'x : Ax >= b, x >= 0} and max{y'b : y'A <= c', y >= 0}.
Let z = Ax - b be the primal slack. Then, the primal central path is:
x(m) = argmin{c'x - m(sum_j ln x_j + sum_i ln z_i)} for m > 0.
Certainty equivalent 
See stochastic program.
Certificate 
A certificate of optimality is a set of conditions that certify some feasible solution, x, is optimal. One example comes from duality: let y be feasible in any dual with objective value equal to that of x. Then, y is a certificate of optimality. In particular, if we have a linear program:
min{c'x : Ax >= b, x >= 0},
a certificate that a feasible x is optimal is the set of dual conditions:
y >= 0, y'A <= c', and y'b = c'x.
(Note the general use of duality does not require any convexity assumption, and the dual can be weak.)
A certificate of unboundedness is a collection of conditions that certify a mathematical program is unbounded. An example from duality is that a primal feasible solution is found and the dual is determined to be infeasible. In particular, if we have a linear program:
min{c'x : Ax >= b, x >= 0},
a certificate that this is unbounded is the existence of a feasible x and the determination that y >= 0, y'A <= c' implies a contradiction (i.e., the dual is infeasible).
A certificate of infeasibility is a set of conditions that certify a mathematical program is infeasible. One class comes from duality: a dual sequence is found whose objective diverges. In particular, if we have a linear program:
min{c'x : Ax >= b, x >= 0},
then a certificate that this is infeasible is the existence of a sequence {y^k} such that y^k >= 0 and (y^k)'A <= 0 for all k, and (y^k)'b -> infinity as k -> infinity.
Chance constraint 
See stochastic program.
Character of solution 
The idea is to define character by solution status. In linear [integer] programming, columns might be partitioned into
 basic vs. nonbasic at lower bound vs. nonbasic at upper bound
 positive in some optimal solution vs. zero in every optimal solution
Rows might be partitioned into
 inactive vs. active
 nonbinding vs. binding
 positive dual price in some optimal solution vs. zero dual price in every optimal solution
Only some rows and columns might be partitioned, and others are free to change. The character is the property we want to fix by the partition. For example, if the LP has activities for operating a plant, we might want to specify the plant operation as a character of the solution, never mind specific levels (or maybe some minimal throughput is specified).
Then, one conducts sensitivity analysis by asking for preservation of the character. Classical LP sensitivity analysis does this by preserving the optimality of the basis; interior point analysis seeks to preserve the optimality of the partition. Both are special cases, and one might want to focus on only a portion of the LP. Here are some properties of interest:
 If the character holds at [c', b'] and at [c", b"], it holds for all [c, b] in their line segment.
 The stability region is a convex set.
This notion of character extends naturally to combinatorial optimization, such as preserving a key portion of a TSP tour or job order in a schedule.
Characteristic cone 
The characteristic cone of a polyhedron, P, is
{y : x + y is in P for all x in P}.
If P = {x : Ax <= b}, then its characteristic cone is {y : Ay <= 0}, which is the same as the recession cone.
Chemical equilibrium problem 
The problem is to minimize total free energy subject to mass balance:
min{sum_j x_j (c_j + ln(x_j / sum_k x_k)) : Ax = b, x >= 0}.
Chinese postman problem 
A mailman starts from the post office, delivers the mail on his route, covering each street at least once, then returns to the post office. The problem is to find a traversal such that the total distance travelled is minimized. (Some streets are one-way, and the postman is driving.) Abstractly, the problem is to find a least-cost cycle in a graph that uses each edge and arc (forward direction only) at least once. This differs from the Traveling Salesman Problem in that the postman can traverse an edge more than once. A related problem is to determine whether the postman can cover his route within a prescribed distance (while starting and ending at his post office). If so, the cycle is produced. This is computationally equivalent to the minimization problem (in a complexity sense), and is sometimes taken as the problem definition.
Chinese remainder theorem 
Let n_1, ..., n_k be relatively prime positive integers, and let a_1, ..., a_k be any integers. Then, there exists x such that x = a_i mod n_i for each i. The value of x is uniquely determined modulo the product n_1 n_2 ... n_k.
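The standard constructive proof can be coded directly (an illustrative sketch; the function name is mine, not from the glossary):

```python
from math import prod

def crt(residues, moduli):
    """Solve x = a_i (mod n_i) for pairwise-coprime moduli n_i.
    Returns the unique x in [0, N), where N is the product of the moduli."""
    N = prod(moduli)
    x = 0
    for a, n in zip(residues, moduli):
        M = N // n
        # pow(M, -1, n) is the modular inverse of M mod n (Python 3.8+).
        x += a * M * pow(M, -1, n)
    return x % N

# x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)  ->  x = 23
print(crt([2, 3, 2], [3, 5, 7]))  # 23
```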
Cholesky factorization 
Given an n-by-n symmetric matrix A, a lower triangular matrix, L, is obtained, called the Cholesky factor, such that A = LL'. This is particularly useful for solving linear systems, Ax = b, by using forward substitution, Ly = b, then backward substitution, L'x = y. The original algorithm assumes A is positive definite, but it can apply more generally. This is also called Cholesky decomposition.
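A minimal sketch of the factor-then-substitute scheme, assuming A is symmetric positive definite (function names are illustrative):

```python
import math

def cholesky(A):
    """Return lower-triangular L with L L' = A (A symmetric positive definite)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve(A, b):
    """Solve Ax = b via Ly = b (forward), then L'x = y (backward)."""
    L = cholesky(A)
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

A = [[4.0, 2.0], [2.0, 3.0]]
print(solve(A, [10.0, 9.0]))  # approximately [1.5, 2.0]
```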
Chvatal cut 
A cutting plane of the form,
sum_j F(a_j) x_j >= F(b),
where F is a Chvatal function and a_j is the j-th column of the constraint matrix.
Chvatal function 
This class is defined recursively:
 f is a Chvatal function (of rank 0) if it is a linear function.
 If f_1 and f_2 are Chvatal functions, then af_1 + bf_2 is a Chvatal function for any nonnegative a and b, and its rank is max{rank(f_1), rank(f_2)}. (Some require a and b to be rational.)
 If f is a Chvatal function of rank r, then ceiling(f) is a Chvatal function, and its rank is r + 1 if ceiling(f) is not equal to f.
This arises in integer linear programming in several ways, see Gomory function.
Closed form solution 
This is where one could express a solution (e.g., optimal) by an equation of the form: x = F(p), where F is some function of the parameters, p. This is typically meant to suggest that no algorithm is needed, but one must be careful since F is not limited in any way, such as being "easy" to evaluate. For example, under mild assumptions, the coordinates of a global maximum of a function, f, on a compact domain are given by the following integral:
x* = lim_{t -> infinity} [integral of x f(x)^t dx] / [integral of f(x)^t dx].
(The idea is to weight "mass" at the global maximum, and it is valid if the set of global maxima is convex.) This is not too useful, and the integral would typically have to be evaluated with a numerical method, i.e., an algorithm.
Closed function 
In convex analysis, a function is closed if its epigraph is a closed set. (If f is concave, it is its hypograph that must be closed.)
Closed map 
Let A: X -> 2^Y be a point-to-set map and suppose {x^k} -> x and {y^k} -> y are such that y^k is in A(x^k) for all k. Then, A is closed at x if y is in A(x). The map is closed on a subset of X, say S, if A is closed at each x in S.
Closed set 
A set, S, is closed if it contains all its limit points, i.e., if {x^k} is in S and x^k -> x, then x is in S. By the logic of the definition, an empty set is closed (it is also open). Any intersection of closed sets is a closed set, so the feasibility region of a mathematical program in standard form is closed if each constraint function is continuous. (g can be only upper semicontinuous for the constraint, g(x) <= 0.) An example of a set that is not closed is {x : 0 < x <= 1}. The point 0 can be approached by a sequence, say x^k = 1/k, but 0 is not in the set. The closure of a set is the union of the set and all of its limit points.
Closure condition 
The condition that the closure of a nonempty strict interior be equal to the level set: cl{x : g(x) < 0} = {x : g(x) <= 0}. Here is an example where the closure condition fails:
g(x) = x + 1 for x < -1; g(x) = 0 for -1 <= x <= 0; g(x) = x for x > 0.
Note that g is continuous and quasiconvex. Its level set is {x : x <= 0}, but the strict interior is {x : x < -1}, so its closure is only {x : x <= -1}. We lose the flat portion, [-1, 0].
This is important in the use of interior point methods and in stability. Here are some functions that satisfy the closure condition:
 explicitly quasiconvex
 positively homogeneous (all degrees positive or all negative)
 monotonic (all strictly increasing or all strictly decreasing)
When equality constraints are present, there are two forms of extension of the closure condition: to consider the relative strict interior, and to consider "feasible sequences". The first generally assumes h is affine, so the closure condition becomes:
cl{x : g(x) < 0, h(x) = 0} = {x : g(x) <= 0, h(x) = 0},
where the relative strict interior, {x : g(x) < 0, h(x) = 0}, is required to be nonempty.
Coercive function 
The function f is coercive with respect to S if there exists a vector y such that for any sequence {x^k} in S having the property that ||x^k|| -> infinity, we also have (f(x^k) - y'x^k) -> infinity.
Some people use a different definition, where f: R^n -> R (i.e., a scalar, real-valued function): f is coercive if f(x) -> infinity as ||x|| -> infinity.
Column generation 
This pertains to solving a linear program whose columns are generated during pricing. Typically, the number of columns is astronomically large, possibly infinite. An example is when solving the randomized program, as with the Generalized Lagrange Multiplier method. In that case, column generation consists of maximizing the Lagrangian. A similar view applies to Dantzig-Wolfe decomposition. From the dual view, this is a cutting plane method since generating a column in the primal corresponds to generating a constraint in the dual.
The concept applies to any mathematical program, and the randomized program model highlights the duality and how a completely general mathematical program can be considered with (generalized) linear programming.
Combinatorial program 
Let N = {1, ..., n} and consider a finite collection of subsets of N, say S_1, ..., S_m. For each subset there is an objective function value, f(S_i), and the problem is to optimize f(S_i) over the collection. Typically, the feasible subsets are represented by inclusion or exclusion of members such that they satisfy certain conditions. This then becomes a special class of integer programs whose decision variables are binary valued: x_i = 1 if the i-th element is in the subset; otherwise, x_i = 0. (IP formulations are not always easy, and often there is more than one formulation, some better than others.)
 Here are some examples:
 assignment
 covering, cutting stock
 knapsack
 matching
 packing, partitioning
 routing
 sequencing, scheduling (jobs), shortest path, spanning tree
 traveling salesman
Also see integer program.
Compact formulation 
In integer programming, this refers to having a polynomial number of constraints. For example, look at the travelling salesman formulations. The linear form has an exponential number of "subtour elimination constraints," so it is not compact. The quadratic assignment formulation is compact.
Compact set 
This has a general meaning in mathematical analysis. For mathematical programming, where we work in finite-dimensional Euclidean vector spaces, it means the set is closed and bounded. This is often an assumption about the feasibility region in order to ensure the existence of an optimum value for a continuous objective function – see Weierstrass' theorem. Any finite set is compact. An open interval, (a, b), is not compact because its endpoints, a and b, are limit points but are excluded from the set.
Compatibility theory 
The idea that a solution's character does not change for a particular perturbation. In linear programming the character could be an optimal basis, and the theory is concerned with whether a particular basis remains optimal when the data is changed in a prescribed direction. A Fundamental Theorem of Basis Compatibility is the following:
is an admissible direction for perturbing if, and only if, it is compatible with some equilibrium basis.
The range of compatibility of a basis, B, for a direction, d, is the greatest step, t, for which B remains optimal.
The basis spectrum is the greatest range over all equilibrium bases.
Complementarity condition 
Satisfying complementary slackness.
Complementarity problem 
Let f: R^n -> R^n. The complementarity problem (CP) is to find x such that x >= 0, f(x) >= 0 and x'f(x) = 0. It is complementary because every solution has the property that either x_j = 0 or f_j(x) = 0 (or both) for each j. The linear complementarity problem (LCP) is when f is affine: f(x) = Mx + q.
The problem generalizes to allow bounds so that L <= x <= U. Then, x is required to satisfy:

 f_j(x) = 0 if L_j < x_j < U_j
 f_j(x) >= 0 if x_j = L_j
 f_j(x) <= 0 if x_j = U_j.
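Checking whether a given point solves an LCP is straightforward; a small sketch (function name and data are illustrative, not from the glossary):

```python
def is_lcp_solution(M, q, x, tol=1e-9):
    """Check x >= 0, f(x) = Mx + q >= 0, and x'f(x) = 0 (complementarity)."""
    f = [sum(M[i][j] * x[j] for j in range(len(x))) + q[i] for i in range(len(q))]
    return (all(v >= -tol for v in x)
            and all(v >= -tol for v in f)
            and abs(sum(a * b for a, b in zip(x, f))) <= tol)

M = [[2.0, 0.0], [0.0, 1.0]]
q = [-2.0, 1.0]
# x = (1, 0): f(x) = (0, 1); each pair has x_j = 0 or f_j(x) = 0.
print(is_lcp_solution(M, q, [1.0, 0.0]))  # True
```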
Complementary slackness 
The condition that two nonnegative vectors are orthogonal. It arises in the Kuhn-Tucker conditions, where the vector of Lagrange multipliers, say u, must be orthogonal to the (inequality) constraint functional value: u'g(x) = 0. This means either u_i = 0 or g_i(x) = 0 for each i; that is, if a constraint is not active, its Lagrange multiplier must be zero.
Complementary variables 
If u and v are the two vectors that satisfy complementary slackness, each pair, u_j and v_j, are called complementary variables. In linear programming, primal levels and dual prices are complementary variables.
Complexity 
A measure of computer time or space to solve a problem by an algorithm as a function of the problem's dimensions. Suppose T(n) is the time it takes to solve an instance of the problem with dimension n. Then, the algorithm has (worst-case) time complexity O(f(n)) if the greatest time it could take to solve an instance of dimension n is O(f(n)). When f is a polynomial, we say the algorithm has polynomial time complexity. The Klee-Minty polytope shows that the elementary simplex method does not have polynomial time complexity.
The average time complexity is the average (rather than worst) time of an algorithm (or class of algorithms) over some class of problems, for which some probability distribution is assumed. Absent a modifier, the complexity of an algorithm is taken to mean its worstcase time complexity.
Whereas complexity refers to the performance of an algorithm, see the notion of NP-completeness for the related meaning of problem complexity. The standard notation to describe complexity in terms of problem dimension, say n, is O(g(n)). This "big-O" notation means the following: a function, f, is O(g(n)) if there exists a constant, K, and an N, such that f(n) <= K g(n) for all n >= N. For example, if an algorithm requires 2n^2 + 5n fundamental operations on a problem of size n, its time complexity is O(n^2). See the supplement on complexity for more information.
Complicating variables 
Those variables which, when fixed at particular values, render the remaining mathematical program relatively easy to solve. One example is the use of binary variables for fixed charges in an otherwise linear program. More generally, see Benders' decomposition.
Component 
In a graph (or network), this is a maximal connected subgraph. (The number of components is 1 if the graph is connected.)
Composite concave program 
A global optimization problem whose objective is a composite concave function.
Composite function 
A function of a function, of the form f(g(x)).
Concave function 
The real-valued function f is concave if its domain, say S, is a convex set, and for x, y in S and a in [0, 1]:
f(ax + (1-a)y) >= a f(x) + (1-a) f(y).
Condition number 
This is ||A|| ||A^{-1}||, when A is nonsingular and ||.|| is some matrix norm. This arises in convergence analysis, where A is the hessian. Whenever A is symmetric and positive definite (as the hessian of a strictly convex function), and the matrix norm is that induced by the Euclidean norm (i.e., ||A|| = max{||Ax|| : ||x|| = 1}), the condition number is the ratio of the extreme values of the eigenvalues. It is often used to measure the convergence rate of an ascent algorithm, due to Kantorovich's inequality. (Unfortunately, it is erroneously used to measure the goodness of an algorithm – see Myths and Counterexamples in Mathematical Programming.)
Cone 
A set, C, with the property that if x is in C, then ax is in C for all positive real a. A convex cone is a cone that is also a convex set. Equivalently, C is a convex cone if x + y is in C whenever x and y are in C (and C is a cone). (An example of a cone that is not convex is the union of the axes.) A polyhedral cone is a cone that is also a polyhedron; equivalently, C is a polyhedral cone if there exists a matrix A such that C = {x : Ax <= 0}. An example of a cone that is not polyhedral is {(x, y, z) : x^2 + y^2 <= z^2, z >= 0}.
A quadratic cone is of the form {x : x'Ax <= 0}, where A is any (real, symmetric) matrix. If A is negative semidefinite, the cone is all of space. If A is positive definite, the cone is just the origin, {0}. So, quadratic cones usually arise when A is indefinite. Example: {(x, y) : x^2 - y^2 <= 0}, i.e., |x| <= |y|. See each of the following special cones:
Cone of optimality 
Given an LP in standard form, min{c'x : Ax = b, x >= 0}, let B be an optimal basis (for which the dual is feasible). Then, the cone of optimality of B is {b : B^{-1}b >= 0}. It is so called because B remains an optimal basis for any b in this cone.
Conic program 
The standard form is
min{c'x : x in K},
where K is a cone (not necessarily convex). This is more general than it looks. Suppose X is any nonempty set, and we have the mathematical program:
min{c'x : x in X}.
Then, defining K = {t(1, x) : x in X, t >= 0}, we have that the above problem is equivalent to the conic program:
min{c'z : (t, z) in K, t = 1}.
This problem class also includes the semidefinite program, as x could be a matrix (with K a cone of positive semidefinite matrices).
Conjugate directions 
Direction vectors chosen in an algorithm that are conjugate with previous directions, with respect to the hessian of the objective. For the unconstrained quadratic, f(x) = c'x + x'Qx/2 (where Q is symmetric), direction vector d^k is conjugate if (d^k)'Q d^j = 0 for all previous direction vectors, d^j (j < k).
Conjugate duality 
See the supplement on particular duals.
Conjugate function 
The convex conjugate of f on X, denoted f*, is the greatest convex approximation from below:
f*(y) = sup{y'x - f(x) : x in X},
and its effective domain is {y : f*(y) < infinity}. The concave conjugate of f on X, denoted f_*, is the least concave approximation from above:
f_*(y) = inf{y'x - f(x) : x in X},
and its effective domain is {y : f_*(y) > -infinity}. This is a foundation for Lagrangian duality, viewed in response space.
Conjugate gradient method 
An ascent method for unconstrained optimization such that successive directions are conjugate with respect to the hessian for a quadratic objective function.
Conjugate vectors 
With respect to a symmetric matrix, A, x and y are conjugate if x'Ay = 0. (In particular, if A = I, x and y are conjugate if they are orthogonal.)
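The test x'Ay = 0 is a one-liner to check; a small sketch (function name and matrices are illustrative):

```python
def are_conjugate(A, x, y, tol=1e-12):
    """x and y are conjugate w.r.t. symmetric A if x'Ay = 0."""
    Ay = [sum(A[i][j] * y[j] for j in range(len(y))) for i in range(len(A))]
    return abs(sum(xi * v for xi, v in zip(x, Ay))) <= tol

A = [[2.0, 0.0], [0.0, 1.0]]
print(are_conjugate(A, [1.0, 0.0], [0.0, 5.0]))  # True: x'Ay = 0
I = [[1.0, 0.0], [0.0, 1.0]]
print(are_conjugate(I, [1.0, 1.0], [1.0, -1.0])) # True: for A = I, conjugate = orthogonal
```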
Conjunctive normal form 
A logical expression that is written as a conjunction,
C_1 ∧ C_2 ∧ ... ∧ C_m,
where ∧ is the conjunction (logical and). Each C_i is called a clause and has the form
L_1 ∨ L_2 ∨ ... ∨ L_k,
where ∨ is the disjunction (logical inclusive or) and each L_j is a literal: a proposition or its negation.
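A compact way to represent and evaluate CNF formulas (a hypothetical encoding, not from the glossary: a positive integer i denotes variable i, a negative integer its negation):

```python
def eval_cnf(clauses, assignment):
    """True iff every clause has at least one satisfied literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(eval_cnf(formula, {1: True, 2: False, 3: True}))   # True
print(eval_cnf(formula, {1: False, 2: False, 3: False})) # False
```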
Connected network 
In the undirected case, a graph is connected if, for any pair of nodes, there is a path that contains both of them. In a directed graph, or network, the path may be required to be directed (i.e., follow the orientations of the arcs), in which case the network is strongly connected; or, the path may be allowed to ignore the arc orientations, in which case the network is weakly connected.
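Connectivity in the undirected case is checked by a single graph traversal; a minimal sketch (names illustrative):

```python
from collections import deque

def is_connected(n, edges):
    """Undirected graph on nodes 0..n-1; True if there is one component."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = {0}
    queue = deque([0])
    while queue:  # breadth-first search from node 0
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n

print(is_connected(4, [(0, 1), (1, 2), (2, 3)]))  # True
print(is_connected(4, [(0, 1), (2, 3)]))          # False
```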
Consistency 
In general, a logical theory is consistent, if it contains no contradictions.
In constraint programming, a (partial) variable assignment is called consistent with respect to a set of constraints if none of the constraints is violated.
More generally, in CP, different definitions of consistency are used as a basis for inference. See:
Consistent 
Pertains to a system of equations and inequalities, for which there exists a feasible solution.
Constraint 
Usually this is a relation of the form of an inequality, g(x) <= 0, or equation, h(x) = 0. More generally, it can be any restriction the decision variables must satisfy. For example, some regard "x must be integer-valued" as a constraint, while others would say that this is simply the domain of the decision variables in the mathematical program. There are other relations, such as the logical form of a precedence constraint: IF x = 0 THEN y = 0.
In constraint programming, a constraint is an arbitrary relation imposed on a set of variables, where each variable must be assigned a value from its domain. A constraint holds if all variables in its scope are assigned values from their domains and the relation is satisfied.
Constraint graph 
Constraint graphs are used to conceptualize the set of constraints and their scopes in a CSP.
A primal constraint graph represents variables by nodes and associates an arc with any two nodes whose variables appear in the scope of the same constraint.
A dual constraint graph represents each constraint with a node and associates a labeled arc with any two nodes whose constraints' scopes share variables. The arcs are labeled by the shared variables.
Constraint optimization problem 
The general aim of a constraint optimization problem is to find an optimal solution to a set of constraints subject to some objective function obj. For example, consider a CSP together with a function obj mapping solutions of the CSP to the set of real numbers, R.
A solution, s, to a constraint optimization problem is a complete assignment of variables that satisfies all constraints and for which the value obj(s) is optimal (either minimal or maximal, depending on the sense of the optimization).
See also constraint satisfaction problem.
Constraint programming 
Constraint Programming (CP) is a paradigm to solve satisfaction or optimization problems by a two-step approach: propagation and tree search. Given a CSP, the decision variables' domains are tightened by first propagating the constraints. If after this step all domains contain only one value, then a solution has been found. If at least one domain is empty, the CSP has no solutions. Otherwise, the second step, tree search, is initiated: by specifying a variable ordering heuristic and a value ordering heuristic, a search tree is constructed, which is then traversed according to one of various search strategies. If the problem is a satisfaction problem, the first solution found is returned, or it is proved that no solution exists. For an optimization problem, the whole tree is traversed in a branch-and-bound search and the best solution returned.
Constraint propagation 
Typically, the propagation process is repeated until a "fixed point" is reached where no further values can be removed from the domain of any variable.
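The fixed-point idea can be illustrated with a single constraint, x < y, pruning two small domains (a hypothetical sketch; real propagators handle many constraints and empty domains):

```python
def propagate_less_than(dx, dy):
    """Prune domains for the constraint x < y (assumes domains stay nonempty)."""
    dx = {v for v in dx if v < max(dy)}   # x needs some larger value in dy
    dy = {v for v in dy if v > min(dx)}   # y needs some smaller value in dx
    return dx, dy

# Repeat until a fixed point: no further values can be removed.
dx, dy = {1, 2, 3, 4}, {1, 2, 3}
while True:
    ndx, ndy = propagate_less_than(dx, dy)
    if (ndx, ndy) == (dx, dy):
        break
    dx, dy = ndx, ndy
print(dx, dy)  # {1, 2} {2, 3}
```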
Constraint propagator 
A constraint propagator is an algorithm (often associated with a global constraint) which implements a constraint propagation algorithm. Typically, the algorithm prunes the domains of the variables involved in the constraint, removing values for which it can be shown that they participate in no solutions given the current search node.
A constraint propagator is also known as a filtering algorithm.
Constraint qualification 
Conditions on the constraint functions (g and h) that are sufficient to make the Lagrange Multiplier Rule valid. Here is an example to illustrate what can go wrong: min{x : x^2 <= 0}.
Since x = 0 is the only feasible solution, it is optimal. The Lagrange Multiplier Rule requires that there exist u >= 0 for which 1 + u(2x) = 0 at x = 0, but the constraint gradient vanishes there, so no such u exists. Slater used this example in illustrating his interiority condition. The classical qualification, given by Lagrange's multiplier theorem without inequality constraints, is that grad h(x) have full row rank, which stems from the Implicit Function Theorem. Another constraint qualification is that all constraint functions be affine (even with redundant constraints). Each constraint qualification gives a sufficient condition for the Lagrange Multiplier Theorem to be valid. A constraint qualification is necessary if it must hold in order to guarantee that the Lagrange Multiplier Rule is valid for all smooth objective functions having an optimum at x.
Constraint satisfaction problem 
A classical Constraint Satisfaction Problem (CSP) is defined by a finite set of variables, x_1, ..., x_n, where each variable x_i is associated with a finite domain of integers, D_i. Furthermore, a finite set of constraints is imposed on the variables, where a constraint is an arbitrary relation on a subset of the variables. A solution to the CSP is a value assignment to each variable such that the value lies within the respective domain and all constraints are satisfied.
There exist different variants and subclasses of constraint satisfaction problem:
Constraint solver 
A constraint solver typically consists of two parts:
 a propagation engine that prunes the variable domains according to the constraint propagator of each constraint.
 a search engine that branches on different values for the variables according to the selected search strategy.
A solution is found as soon as every variable is assigned a value from its domain and all constraints are satisfied.
A constraint solver can typically search for one solution (returning the first solution it finds, if one exists) or for all solutions.
Continuous program 
A mathematical program with continuous variables. Moreover, the term is commonly used to further imply that the objective and constraint functions are continuous, an assumption that allows certain analytical results. For example, Weierstrass first proved that continuous functions attain both a maximum and a minimum over a compact set. If the problem contains both continuous and integer variables, then the problem is said to be a mixed integer program.
Contour 
Also called an isoquant of a function, f. The projection of its graph into its domain: {x : f(x) = c}, where c is the contour value.
Contraction map 
Also called a contractor. A function that brings points closer together, forming the foundation for convergence by successive approximation. Mathematically, we have F: X -> X, where X is in a normed space, with the norm of x denoted by ||x||. F is a contractor if there exists a constant a in [0, 1) such that ||F(x) - F(y)|| <= a ||x - y|| for all x and y in X. See the supplement on fixed points.
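Successive approximation is a short loop; a minimal sketch, assuming F is a contraction on the region of interest (function name is mine):

```python
import math

def fixed_point(F, x0, tol=1e-12, max_iter=1000):
    """Successive approximation x_{k+1} = F(x_k); converges when F is a contractor."""
    x = x0
    for _ in range(max_iter):
        x_next = F(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# F(x) = cos(x) is a contraction near its fixed point (|F'(x)| = |sin x| < 1 there).
print(fixed_point(math.cos, 1.0))  # approximately 0.7390851332
```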
Convergence 
Specifically, convergence of an algorithm. The algorithm is represented as the point-to-set map, A, where there is some selection function to choose x^{k+1} from A(x^k) if A(x^k) has more than one member. Convergence means that the sequence, {x^k}, has a limit point, say x*, such that x* satisfies certain conditions. Ideally, the conditions are that x* is an optimal solution, at least locally, but this is often not the definition used in a convergence theorem. (For example, in unconstrained optimization, most algorithms converge to a stationary point, which need not be even a local optimum.)
Let s^k -> 0, where {s^k} is a scalar series pertaining to the sequence {x^k}. Typically, s^k = ||x^k - x*||, which is sometimes called policy convergence (to a solution, x*). We could also have s^k = |f(x^k) - f*|, where f is the objective function, in which case the concern is with value convergence to an optimal value, f*.
Some related terms and concepts:
 Dual convergence. Dual values, notably Lagrange multipliers, converge to a dual solution.
 Geometric convergence. Same as linear convergence, but usually used when the sequence is exactly a geometric progression: s^k = s^0 r^k, where r is the rate.
 Global convergence. Usually means the same as globally convergent, but some mean that the algorithm convergences to a global solution.
 Globally convergent. Convergence to a solution from any starting point.
 Linear convergence. Order is 1 and rate is less than 1 (see below).
 Local convergence. Some mean locally convergent, but some mean that the limit point is a local optimum (or just satisfies necessary conditions, see Myths and Counter Examples in Mathematical Programming to avoid misconception).
 Locally convergent. There exists an open neighborhood of 0 such that {s^k} -> 0 from any s^0 in the neighborhood.
 Order of convergence. The order is
sup{p : limsup_{k -> infinity} s^{k+1}/(s^k)^p < infinity}.
The order being 1 is linear convergence and the order being 2 is quadratic convergence. Most (nonfinite) algorithms to solve mathematical programs are between linear and quadratic.
Let s^k -> 0 and suppose s^0 < 1 (for notational convenience). The term order derives from the approximate equation, s^{k+1} ~ r(s^k)^p, where r is the rate. If this equation were exact, we would have s^k = r^k s^0 if p = 1, and s^k = r^{(p^k - 1)/(p - 1)} (s^0)^{p^k} if p > 1, for all k. If p = 1 and r = 0.1, we gain one decimal place each time: 0.1, 0.01, 0.001, etc. If p = 2, the number of accurate decimal places approximately doubles each iteration: 0.1, 0.01, 0.0001, etc. Unfortunately, the underlying equation is not exact – see Myths and Counter Examples in Mathematical Programming to avoid misconception. Some authors call this q-order (or Q-order) convergence to distinguish it from variations of the definition of order. Each definition is designed to capture the notion of how many digits of accuracy are added as the sequence approaches its limit.
 Rate of convergence. This is generally used to mean the limiting ratio: r = lim_{k -> infinity} s^{k+1}/(s^k)^p, given the order is p.
 Sublinear convergence. Order is 1 and rate is 1 (slower than all linear convergence), e.g., s^k = 1/k.
 Superlinear convergence. Order is 1 and rate is 0 (faster than all linear convergence), e.g., s^k = 1/k!.
 Stochastic convergence. This applies when the successor point is a random variable, as in simulated annealing. Here are the most common types of convergence for a sequence of random variables, .
 with probability 1 or almost everywhere (abbr., a.e.). P(X^k -> X) = 1.
 in probability. P(|X^k - X| > e) -> 0 for all e > 0.
 in distribution. The sequence of cumulative distribution functions (cdf), {F_k}, converges pointwise: F_k(x) -> F(x) for all x at which F is continuous, where F_k is the cdf of X^k and F is the cdf of X.
 in p-th mean. E[|X^k - X|^p] -> 0. For p = 2, this is called convergence in quadratic mean or in mean square.
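The decimal-place behavior of linear vs. quadratic convergence described above can be seen numerically (an illustrative sketch):

```python
# Linear (p = 1, r = 0.1) gains about one decimal place per step;
# quadratic (p = 2) roughly doubles the number of accurate digits.
linear, quadratic = [0.1], [0.1]
for _ in range(3):
    linear.append(0.1 * linear[-1])        # s_{k+1} = r * s_k
    quadratic.append(quadratic[-1] ** 2)   # s_{k+1} = (s_k)^2
print([round(v, 12) for v in linear])     # [0.1, 0.01, 0.001, 0.0001]
print([round(v, 12) for v in quadratic])  # [0.1, 0.01, 0.0001, 1e-08]
```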
Convex combination 
An affine combination of vectors whose coefficients are nonnegative, i.e., sum_i a_i x^i, where a_i >= 0 for all i and sum_i a_i = 1.
Convex cost flow problem 
The min-cost network flow problem with a nonlinear convex cost function.
Convex function 
Let f: S -> R. Then f is convex if S is a convex set, and for x, y in S and a in [0, 1]:
f(ax + (1-a)y) <= a f(x) + (1-a) f(y).
Equivalently, its epigraph is convex.
Convex hull 
(of a set). The intersection of all convex supersets (which can be limited to halfspaces). Equivalently, the set of all convex combinations of points in the set (which can be limited to convex combinations of at most n+1 points, in n dimensions, which is known as Carathéodory's Theorem).
Convex program 
The mathematical program min{f(x) : g(x) <= 0, h(x) = 0} is convex if the objective function is convex, and the feasible region is a convex set. In particular, the mathematical program is convex if f and each g_i are convex, and h is affine. See the supplement, Convex Cones, Sets, and Functions.
Here are some key properties of a convex program:
 Every local optimum is a global optimum.
 Firstorder conditions are sufficient.
 Lagrangian duality is strong.
Convex set 
If any two points are in the set, so is their line segment, i.e., if x and y are in S, then
ax + (1-a)y is in S for any a in [0, 1]. See Myths and Counter Examples in Mathematical Programming.
Convex simplex method 
This extends the general strategy of the simplex method in linear programming to maximize a concave objective over linear constraints of the form Ax = b and x >= 0. (A form of local convergence applies when the objective is not concave, but is smooth.) A tableau is maintained, but nonbasic variables need not be at zero level. The partition [B | N] is used to compute a generalized reduced cost of the form d = grad_N f(x) - N'y. The vector y is determined by the basic portion: y = (B')^{-1} grad_B f(x) (so d_B = 0).
As in the simplex method, pricing is used to determine whether there is an improving nonbasic vector. If not, the algorithm terminates (satisfying the KuhnTucker conditions); otherwise, a line search is used to determine the new point, changing only the basic variables to compensate for the change in the one nonbasic level. If this causes a basic variable to reach zero, the basis is changed with the pivot operation.
Convexity cut 
A class of cutting planes derived by considering a convex set in a polyhedron, P. The form of the cut is
and it is derived as follows.
Let be an extreme point of a given polyhedron, which contains a given set, . Suppose is a convex set in whose interior contains but does not contain any point of . Let be linearly independent vectors in , and let be such that and for all (e.g., the edges emanating from in ). Define and (i.e., . Then, the cutting plane excludes without excluding any other in for which . The cut, , is equivalent to the above form, , where for some .
One special case is the intersection cut for a 0-1 integer program:
 S = {x in P : x_j ∈ {0, 1} for all j};
 P = {x ≥ 0 : Ax = b, Dx ≥ d}, where Dx ≥ d are the cuts already added;
 x⁰ is an extreme point of P, obtained by solving the LP relaxation with a simplex method, so x⁰ is a basic optimum;
 the u^j are the directions of the edges at x⁰, given by the columns of −B⁻¹N, where B is the basis that transforms the original equations to x_B = B⁻¹b − B⁻¹N x_N;
 C is a sphere circumscribing the unit hypercube, {x : ||x − ½e|| ≤ ½√n}, where e = (1, …, 1).
Corner polyhedron problem 
Gomory's relaxed IP that drops the nonnegativity constraints of basic variables. Suppose x = (x_B, x_N) is a basic solution to the LP min{cx : Ax = b, x ≥ 0}, where x_B is the vector of basic variables. Then, the corner polyhedron problem is

min d_N x_N : x_B = B⁻¹b − B⁻¹N x_N, x integer, x_N ≥ 0

(dropping x_B ≥ 0), where d_N is the vector of reduced costs of x_N in the LP solution. (Note: d_j = 0 for all j such that x_j is basic.)
Correlation matrix 
See elliptope.
Cost of capital 
Used in computing an objective function (e.g., net present value), it is the interest rate. The units are currency per time period (e.g., dollars per year). One must be careful in computing this. For example, the marginal cost of capital can be greater than the marginal interest rate because the latter applies to the entire borrowing, not just the last dollar borrowed. (This is the "classical" definition, which does not consider advances in option theory.)
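A small numerical illustration of that caution (the rate schedule and loan size below are hypothetical, not from the glossary): when the interest rate rises with the amount borrowed and applies to the entire loan, the marginal cost of capital exceeds the marginal interest rate.

```python
# Illustrative numbers: the rate applies to the ENTIRE borrowing b, so the
# marginal cost of capital = r(b) + r'(b)*b > r(b).

def rate(b):
    return 0.05 + 0.01 * (b / 100_000.0)   # hypothetical rate schedule

def total_cost(b):
    return rate(b) * b                      # the rate applies to all of b

b, h = 100_000.0, 1.0
marginal_cost = (total_cost(b + h) - total_cost(b)) / h
print(round(rate(b), 4))        # marginal interest rate at b: 0.06
print(round(marginal_cost, 4))  # marginal cost of capital: 0.07
```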
Covering problem 
The idea is to select enough members in each of a specified collection of sets; that defines covering the sets. Subject to this, there is a cost for the elements selected, and the objective is to minimize total cost. The IP form of the usual set covering problem is

min cx : Ax ≥ 1, x ∈ {0, 1}ⁿ,

where x_j = 1 if element j is selected; else, x_j = 0. The matrix A has 0's and 1's, where the ith row corresponds to the ith set to be covered: A(i, j) = 1 means element j is in set i; else, A(i, j) = 0. The constraint Ax ≥ 1 means that at least one element of each set must be selected. This could be extended to require k_i elements of set i be selected by writing A_i x ≥ k_i. Also see vertex cover.
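For small instances the IP can be solved by enumeration. The sketch below (the 3-set instance and unit costs are assumptions for illustration) tries every 0-1 vector x and keeps the cheapest one satisfying Ax ≥ 1.

```python
# Brute-force set covering: choose elements x_j in {0,1} at cost c_j so
# that every row of the 0-1 matrix A contains a selected element.
from itertools import product

def set_cover(A, c):
    best, best_x = None, None
    for x in product((0, 1), repeat=len(c)):
        # feasibility: each set (row) must contain a selected element
        if all(sum(a * xj for a, xj in zip(row, x)) >= 1 for row in A):
            cost = sum(cj * xj for cj, xj in zip(c, x))
            if best is None or cost < best:
                best, best_x = cost, x
    return best, best_x

# sets {1,2}, {2,3}, {1,3} over elements 1..3, unit costs
A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
print(set_cover(A, [1, 1, 1]))   # minimum cost 2: any two elements cover all sets
```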
Cramer's rule 
To solve Ax = b, where A is nonsingular, the jth coordinate is given by:

x_j = det(A_j) / det(A),

where det(·) is the determinant, and A_j is the matrix obtained if column j of A is replaced by b.
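The rule can be coded directly for a tiny system (a sketch; fine for 3×3, but numerically poor and expensive for larger systems, where factorization methods are preferred).

```python
# Cramer's rule for a 3x3 system Ax = b: x_j = det(A_j)/det(A), where A_j
# is A with column j replaced by b.

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer(A, b):
    d = det3(A)
    xs = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]        # replace column j by b
        xs.append(det3(Aj) / d)
    return xs

A = [[2.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 1.0]]
print(cramer(A, [3.0, 2.0, 2.0]))  # → [1.0, 2.0, 1.0]
```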
Crash 
This is a heuristic to generate an initial point for an algorithm. The term comes from early linear programming computer systems, which used a variety of heuristics to generate an initial basic solution.
Crisscross method 
A method in linear programming that chooses a pivot by possibly crossing from primal to dual, and vice versa. The least-index crisscross method is a finite pivot method that chooses a pivot based on the least index of a column or row for which there is improvement in either the primal or dual infeasibility. It terminates at a basis when one of the following termination conditions is reached:
 Both the associated primal and dual solutions are feasible (implies optimality because complementary slackness holds by construction).
 There is evidence of primal and/or dual infeasibility (like the tests in the simplex method).
Critical path 
A longest path in a network, where length units are time durations. The nodes represent tasks, and the arcs represent precedence constraints. The path is critical because the associated tasks determine the total completion time of the project. Moreover, at least one of their duration times must decrease in order to decrease the total completion time.
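The longest-path computation can be sketched as a short dynamic program over the precedence DAG (the four-task project below is hypothetical, used only to show the recursion).

```python
# Critical path as a longest path in a DAG: nodes are tasks with
# durations; arcs are precedence constraints. The project finishes no
# sooner than the length of its longest (critical) path.

def critical_path_length(duration, preds):
    """Earliest finish time per task; the project time is the max."""
    finish = {}
    def ef(t):                          # earliest finish, memoized
        if t not in finish:
            start = max((ef(p) for p in preds.get(t, [])), default=0)
            finish[t] = start + duration[t]
        return finish[t]
    return max(ef(t) for t in duration)

duration = {"a": 3, "b": 2, "c": 4, "d": 1}
preds = {"c": ["a", "b"], "d": ["c"]}   # c after a and b; d after c
print(critical_path_length(duration, preds))  # → 8  (path a -> c -> d)
```

Shortening task b changes nothing, but shortening any task on the critical path (a, c, or d) reduces the total completion time.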
Critical point 
A critical point of a function is an element of the domain at which the first partial derivatives are zero or fail to exist; for a smooth function, it is a point where the gradient vanishes.
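As a worked example (the cubic below is an assumption chosen for illustration), the critical points of f(x) = x³ − 3x can be located by finding the zeros of f'(x) = 3x² − 3 with bisection.

```python
# Locate critical points of f(x) = x^3 - 3x as zeros of f'(x) = 3x^2 - 3
# (the critical points are x = -1 and x = +1).

def bisect_zero(g, lo, hi, tol=1e-10):
    """Bisection on a sign change of g over [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

fprime = lambda x: 3 * x * x - 3
print(round(bisect_zero(fprime, 0.0, 2.0), 6))   # → 1.0
print(round(bisect_zero(fprime, -2.0, 0.0), 6))  # → -1.0
```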
Crossover operation 
See genetic algorithm.
Cumulative constraint 
is a global constraint that models scheduling problems as follows. Given is a collection of tasks T_1, …, T_n, such that each task T_i is associated with four variables:
 release time, r_i, is the earliest time at which it can begin executing,
 deadline (or due date), d_i, is the time by which it must complete,
 processing time, p_i, is the length of time it takes to complete,
 capacity requirement, c_i, is the capacity of the resource that it uses while it executes.
In addition, we are given the capacity variable C of the resource.
The constraint assigns a starting time s_i to each task such that:
 r_i ≤ s_i and s_i + p_i ≤ d_i: the task starts at or after its release date and ends at or before its deadline, and
 Σ {c_i : s_i ≤ t < s_i + p_i} ≤ C: at any time unit t, the capacity of the resource is not exceeded.
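The two conditions can be expressed as a small feasibility check (a sketch; the three-task instance with integer time units is an assumption for illustration, and a real constraint solver would also propagate domains rather than merely verify).

```python
# Feasibility check for cumulative: given start times s_i, verify each
# task runs inside [r_i, d_i] and the capacity C is never exceeded.

def cumulative_ok(tasks, starts, C):
    """tasks: list of (r, d, p, c) tuples; starts: chosen s_i."""
    for (r, d, p, c), s in zip(tasks, starts):
        if not (r <= s and s + p <= d):
            return False                    # time-window violation
    horizon = max(s + t[2] for t, s in zip(tasks, starts))
    for t in range(horizon):
        load = sum(c for (r, d, p, c), s in zip(tasks, starts)
                   if s <= t < s + p)
        if load > C:
            return False                    # capacity violation at time t
    return True

tasks = [(0, 4, 2, 2), (0, 4, 2, 2), (0, 6, 3, 1)]   # (r, d, p, c)
print(cumulative_ok(tasks, [0, 2, 0], 3))  # → True
print(cumulative_ok(tasks, [0, 0, 0], 3))  # → False (load 5 at t = 0)
```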
Cut search 
An algorithm strategy for global optimization (notably for integer programming) that consists of two alternating phases: search and cut. The search phase finds linearly independent vectors emanating from a root point, x⁰, to set up a probe for the cut phase. Usually, x⁰ is an extreme point, so the search is an edge probe that extends the edges of the cone rooted at x⁰ until they intersect the candidate solution points. Then, a cutting plane is added (usually a convexity cut) to exclude the root point. (A new root point is obtained by solving the new approximating problem on the smaller polyhedron.)
Cutset 
In a weakly connected network, this is a set of arcs whose removal decomposes the network into exactly two components. Separating node s from node t, the value of the cutset is the sum of the capacities of its arcs from the component containing s to the one containing t.
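Given a partition of the nodes, the value is a one-line sum (a sketch; the four-arc network and its capacities are hypothetical).

```python
# Value of a cutset: sum the capacities of the arcs that go from the
# component containing s to the component containing t.

def cutset_value(arcs, s_side):
    """arcs: {(u, v): capacity}; s_side: set of nodes on the source side."""
    return sum(cap for (u, v), cap in arcs.items()
               if u in s_side and v not in s_side)

# small network: s -> a -> t and s -> b -> t
arcs = {("s", "a"): 3, ("a", "t"): 2, ("s", "b"): 1, ("b", "t"): 4}
print(cutset_value(arcs, {"s"}))            # → 4  (arcs s->a and s->b)
print(cutset_value(arcs, {"s", "a", "b"}))  # → 6  (arcs a->t and b->t)
```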
Cutting plane 
A hyperplane whose halfspace cuts off a particular point, such as a noninteger extreme point solution of a linear programming relaxation of an integer program. The cut is such that it does not eliminate any solution to the original problem. For example, suppose we want to solve

max x₁ + x₂ : 2x₁ + 2x₂ ≤ 1, x ≥ 0 and integer.

The only feasible point is (0, 0). The LP relaxation drops the integrality requirement, and an optimal solution is at (1/2, 0). One cutting plane is x₁ + x₂ ≤ 1/4. A deeper one is x₁ + x₂ ≤ 0. (When adding a cutting plane to the LP, a new relaxation is obtained, and the solution is closer to the IP solution.)
Cutting stock problem 
Determine a way to cut standard sheets of material into various shapes (like clothes parts) to minimize waste. This is a (linear) integer programming model: patterns are specified, and a(i, j, k) is the amount of the ith stock (e.g., sheet or roll of material) used to generate the jth output by the kth pattern. Then, let x_k be the level of the kth pattern used. Thus, Σ_{j,k} a(i, j, k) x_k is the amount of the ith stock used, which is limited by its availability: Σ_{j,k} a(i, j, k) x_k ≤ s_i; and the amount of the jth output generated is required to be in some range, say [L_j, U_j] (allowing some demand overruns or underruns). Some models seek to minimize the total waste; other models consider cost too. The most common problems are 2-dimensional (odd shapes from sheets of material); the 1-dimensional case is called the trim problem. In the latter case, the stock index is not needed. For example, consider a stock of rolls of paper with a given width, which must be slit into rolls of various widths. Then, a(j, k) is how much of a stock roll is used by the kth pattern to generate a roll of the jth width, and Σ_j a(j, k) is the total amount of a stock roll used by pattern k.
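For the 1-dimensional trim case, the patterns themselves can be enumerated. The sketch below (stock width and ordered widths are hypothetical) lists every feasible pattern for a width-10 roll and reports the zero-waste ones.

```python
# Enumerate cutting patterns for the trim problem: a pattern is a count
# of each ordered width cut from one stock roll; waste is the leftover.
from itertools import product

STOCK_WIDTH = 10
widths = [3, 4, 5]                     # ordered roll widths

patterns = []
max_counts = [STOCK_WIDTH // w for w in widths]
for counts in product(*(range(m + 1) for m in max_counts)):
    used = sum(c * w for c, w in zip(counts, widths))
    if 0 < used <= STOCK_WIDTH:
        patterns.append((counts, STOCK_WIDTH - used))   # (pattern, waste)

zero_waste = [p for p, waste in patterns if waste == 0]
print(zero_waste)   # → [(0, 0, 2), (2, 1, 0)]: two widths of 5, or 3+3+4
```

A cutting stock model would then choose levels x_k of these patterns to meet demand while minimizing waste or rolls used.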
Cyclic descent 
An algorithm that seeks to optimize a multivariate function by optimizing each coordinate with a univariate optimization technique (keeping the other coordinates fixed). This is repeated until a fixed point is reached. In general, it is possible for such a fixed point not to be an optimum (even locally) because a simultaneous change in variables could result in an improvement. An example is given by:
f(x, y) = (y − x²)(y − 2x²): f(x, 0) = 2x⁴ has a minimum at x = 0, and f(0, y) = y² has a minimum at y = 0. However, f decreases with the simultaneous change y = (3/2)x². Along this parabola, f = −x⁴/4, which is negative for x nonzero. Thus, (0, 0) is not a local minimum of f. For further discussion about this example, see Myths and Counter Examples.
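The stall can be demonstrated numerically. The function below, f(x, y) = (y − x²)(y − 2x²), is a classic example of this type (used here as an illustration): each coordinate slice through the origin is minimized there, yet f is negative along the parabola y = 1.5x².

```python
def f(x, y):
    return (y - x * x) * (y - 2 * x * x)

def argmin_on_grid(g, lo=-1.0, hi=1.0, steps=8001):
    """Crude univariate minimization by grid search."""
    pts = [lo + k * (hi - lo) / (steps - 1) for k in range(steps)]
    return min(pts, key=g)

# Coordinate-wise, the origin looks optimal...
print(abs(argmin_on_grid(lambda x: f(x, 0.0))) < 1e-3)   # True: min over x at y = 0
print(abs(argmin_on_grid(lambda y: f(0.0, y))) < 1e-3)   # True: min over y at x = 0
# ...but a simultaneous move along y = 1.5*x**2 decreases f below f(0, 0) = 0:
print(f(0.1, 1.5 * 0.1 ** 2) < 0.0)                      # True
```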
Cycling 
In linear programming this means revisiting a basis, mainly referring to a simplex method, so that the algorithm would cycle through the same sequence of bases, not converging to an optimal one. See the cycling supplement.