
N-Opt

This is a heuristic for the TSP, though it can apply more generally to a problem that seeks a subgraph containing all nodes. It is a local search that considers a move by removing n edges from a trial solution, then replacing them with some other n edges. In the TSP, the 2-Opt neighbors of the tour LaTeX: \textstyle (1,2,\dots,n) arise from pair-wise exchanges. The set of exchanges, however, is larger than the set of 2-Opt neighbors. For example, consider LaTeX: n=4, so the original tour is LaTeX: \textstyle (1,2,3,4). The six pair-wise exchanges of this permutation are:

      (2,1,3,4):  (1)*--(2)          (3,2,1,4):  (1)*--(2)
                    \  *                          |     *
                     \/                           |     |
                     /\                           |     |
                    /  *                          *     |
                  (4)*--(3)                      (4)--*(3)

      (4,2,3,1):  (1)   (2)          (1,3,2,4):  (1)   (2)
                   |*  * |                        |*  * |
                   | \/  |                        | \/  |
                   | /\  |                        | /\  |
                   */  \ *                        */  \ *
                  (4)   (3)                      (4)   (3)

      (1,4,3,2):  (1)*--(2)          (1,2,4,3):  (1)--*(2)
                   |     *                         *  /  
                   |     |                          \/   
                   |     |                          /\   
                   *     |                         *  \  
                  (4)--*(3)                      (4)--*(3) 

We see duplicates due to the equivalence of any cyclic permutation. Also, there are only two 2-Opt moves if the TSP is symmetric:

                  (1)---(2)                      (1)   (2)
                    \  /                          |\  / |
                     \/                           | \/  |
                     /\                           | /\  |
                    /  \                          |/  \ |
                  (4)---(3)                      (4)   (3) 

For LaTeX: n=2, the replacement is unique - that is, once two edges are chosen for removal, there is only one replacement. This is not true for LaTeX: n > 2, and part of the heuristic is to decide what replacement to make.
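
For illustration, here is a minimal Python sketch (not from the original entry; names are illustrative) of a 2-Opt move on a symmetric TSP tour. Removing two non-adjacent edges and reconnecting the tour amounts to reversing the segment between them:

    def two_opt_move(tour, i, j):
        # Remove edges (tour[i], tour[i+1]) and (tour[j], tour[j+1]), i < j,
        # then reconnect by reversing the segment tour[i+1 .. j].
        return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

    print(two_opt_move([1, 2, 3, 4], 0, 2))   # [1, 3, 2, 4], one of the neighbors shown above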


NP-complete

Problems are divided into two categories: those for which there exists an algorithm that solves them with polynomial time complexity, and those for which there is no such algorithm. We denote the former class of problems by LaTeX: P. There are problems for which no polynomial-time algorithm is known, but for which there is also no proof that none exists. Among these problems that are not known to be in LaTeX: P (nor known not to be), there is a subclass of problems known as NP-complete: those for which either all are solvable in polynomial time, or none is. Formally, a problem is in NP if there exists an algorithm with polynomial time complexity that can certify a solution. For example, it is not known whether there exists a polynomial algorithm to solve a system of Diophantine equations, LaTeX: \textstyle Ax=b \mbox{ for } x \in \mathbb{Z}^n (integer n-vectors). However, we can certify any trial LaTeX: x in polynomial time, just by checking that it is in LaTeX: \textstyle \mathbb{Z}^n, then multiplying by LaTeX: A to compare with LaTeX: b. A problem, LaTeX: p, is NP-complete if it is in NP and, for any problem in NP, there exists a polynomial time algorithm to reduce it to LaTeX: p. A fundamental member of the NP-complete class is the satisfiability problem. It is an open question whether LaTeX: P=\mbox{NP}, and most regard the NP-complete problems as having exponential time complexity. Further details are in the supplement, Introduction to Computational Complexity.
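
The Diophantine certification step can be made concrete. A minimal Python sketch (the function name is illustrative): the check is an integrality test plus one matrix-vector product, both polynomial in the input size.

    import numpy as np

    def certify(A, b, x):
        # Polynomial-time certificate check: x must be an integer vector with Ax = b.
        A, b, x = np.asarray(A), np.asarray(b), np.asarray(x)
        return np.issubdtype(x.dtype, np.integer) and np.array_equal(A @ x, b)

    print(certify([[1, 2], [0, 1]], [5, 2], [1, 2]))   # True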


NP-hard

An optimization problem that relies upon the solution of an NP-complete problem. In that sense, NP-hard problems are at least as hard as NP-complete problems.


Near optimal

A point LaTeX: x that is within a small value, say LaTeX: e > 0, of optimality in some sense. The following are the common measures of nearness:

  • in value: LaTeX: x feasible and LaTeX: \textstyle f(x) \ge f(x^*) - e.
  • in policy: LaTeX: \textstyle ||x - x^*|| \le e.
  • in resource level: LaTeX: \textstyle x \in \argmax \left \{f(y): y \in X, g(y) \le b, h(y) = c \right \}, where LaTeX: \textstyle ||(b, c)|| \le e.


Nearest neighbor algorithm

A type of greedy algorithm for combinatorial programs where there is a measure of nearness between neighbors. An example is the traveling salesman problem, where the procedure is to choose the next city to be the one nearest the current city in the sequence.
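
A minimal Python sketch of the procedure for the TSP, assuming a distance matrix dist (a hypothetical input; names are illustrative):

    def nearest_neighbor_tour(dist, start=0):
        n = len(dist)
        tour, unvisited = [start], set(range(n)) - {start}
        while unvisited:
            current = tour[-1]
            nearest = min(unvisited, key=lambda j: dist[current][j])  # greedy choice
            tour.append(nearest)
            unvisited.remove(nearest)
        return tour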


Negative definite matrix

LaTeX: x^TAx < 0 for all nonzero LaTeX: x.


Negative semi-definite matrix

LaTeX: x^TAx \le 0 for all LaTeX: x.


Neighborhood

For a normed space, the neighborhood of a point, LaTeX: x, is the open ball centered at LaTeX: \textstyle x: \left \{y: ||y-x|| < e \right \}, where LaTeX: e > 0. The closed ball, with LaTeX: \textstyle ||y-x|| \le e, is also a neighborhood. The usual notation (in this glossary) is LaTeX: N_e(x), and context dictates whether it is open or closed. This extends to the neighborhood of a set LaTeX: S by taking the union; equivalently, LaTeX: \textstyle N_e[S] = \left \{y: ||y-x|| \le e \mbox{ for some } x \in S \right \}.

In integer programming, the neighborhood could mean integer-valued neighbors of the form LaTeX: \textstyle |x-x^*| \le 1. In a combinatorial program, where variables are binary, integer-valued neighbors comprise all members of LaTeX: \textstyle \{0,1\}^n. In this case the neighborhood is defined relative to some subset of binary variables reachable by some operation that depends on the problem. This is what is generally meant by a neighborhood in heuristic search, and in simulated annealing or tabu search in particular, where a move is defined in terms of neighbors. In that context, a neighborhood could be as simple as complementing the value of one variable, as in deletions or additions in a knapsack problem, or it could be more complex, like an n-Opt neighbor of a TSP tour.


Nelder-Mead simplex method

This is a heuristic search (not guaranteed to find a solution) for unconstrained optimization. Let LaTeX: \textstyle \left \{x^0, x^1, \dots, x^n \right \} be LaTeX: n+1 points in LaTeX: \mathbb{R}^n that define a simplex. Define the extreme values: LaTeX: \textstyle f(x^u) = \max \left \{f(x^0), \dots, f(x^n) \right \} and LaTeX: \textstyle f(x^l) = \min \left \{f(x^0), \dots, f(x^n) \right \}. Seeking a maximum of LaTeX: f on LaTeX: \mathbb{R}^n, the idea is to replace the worst point, LaTeX: x^l, with a point having a better objective value.


Here is an iteration (sketched in code after this list):
  1. Define the centroid of the simplex without the point of least value: LaTeX: \textstyle c = \sum_i \left \{x^i: i \ne l \right \}/n.
  2. reflection: compute LaTeX: \textstyle r = c + a(c - x^l), where LaTeX: a > 0 (LaTeX: a is called the "reflection coefficient").
  3. expansion: If LaTeX: \textstyle f(r) > f(x^u) (i.e., we have a new best point), compute LaTeX: \textstyle x = c + b(c - x^l), where LaTeX: b > 1 (LaTeX: b is called the "expansion coefficient"). If LaTeX: \textstyle f(x) > f(r), x replaces LaTeX: x^l, and the iteration is complete. Otherwise, LaTeX: r replaces LaTeX: x^l, and the iteration is complete.
  4. At this step, the reflection point LaTeX: r is not better than the best of the simplex vertices (i.e., LaTeX: \textstyle f(r) \le f(x^u)). This is divided into the following cases.
    1. If LaTeX: \textstyle \min \left \{f(x^i): i \ne l \right \} \le f(r) \le f(x^u), replace LaTeX: x^l with LaTeX: r.
    2. If LaTeX: \textstyle \min \left \{f(x^i): i \ne l \right \} > f(r), define LaTeX: x^* as the better of the two: LaTeX: \textstyle f(x^*) = \max \left \{f(x^l), f(r) \right \} (with LaTeX: \textstyle x^* = x^l \mbox{ or } r, resp.). Then, take a contraction step: LaTeX: \textstyle x = c + d(x^* - c), where LaTeX: 0 < d < 1 (LaTeX: d is called the "contraction coefficient"). If LaTeX: \textstyle f(x) > f(x^*), x replaces LaTeX: x^l, and the iteration is complete. Otherwise, the last resort is to replace all LaTeX: x^i with LaTeX: \textstyle x'^i = (x^i + x^u)/2.
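
A Python sketch of one such iteration in this maximizing form (the function name, array layout, and default coefficients are illustrative, not from the glossary):

    import numpy as np

    def nelder_mead_step(f, X, a=1.0, b=2.0, d=0.5):
        # One iteration on the (n+1) x n array X of simplex vertices, modified
        # in place; a, b, d are the reflection, expansion, and contraction
        # coefficients of steps 2-4 above.
        vals = np.array([f(x) for x in X])
        l, u = vals.argmin(), vals.argmax()            # worst (l) and best (u) vertices
        c = (X.sum(axis=0) - X[l]) / (len(X) - 1)      # centroid excluding x^l
        r = c + a * (c - X[l])                         # step 2: reflection
        fr = f(r)
        if fr > vals[u]:                               # step 3: expansion (new best point)
            e = c + b * (c - X[l])
            X[l] = e if f(e) > fr else r
        elif fr >= np.delete(vals, l).min():           # step 4.1: accept the reflection
            X[l] = r
        else:                                          # step 4.2: contraction
            xs, fxs = (X[l], vals[l]) if vals[l] >= fr else (r, fr)
            x = c + d * (xs - c)
            if f(x) > fxs:
                X[l] = x
            else:                                      # last resort: shrink toward x^u
                X[:] = (X + X[u]) / 2
        return X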


Network

A collection of nodes, V, sometimes called vertices, plus a collection of arcs, A, which are directed from one node to another. The two sets form a network, denoted LaTeX: N=[V,A]. As such, it can be considered a directed graph (see other terms, like special graphs).

Here are associated functions and data values (illustrated in the code sketch after this list):
  • tail of k-th arc LaTeX: (i, j) is node LaTeX: i; we sometimes write LaTeX: T(k).
  • head of k-th arc LaTeX: (i, j) is node LaTeX: j; we sometimes write LaTeX: H(k).
  • in-degree of node LaTeX: i is LaTeX: \textstyle |\left\{k: H(k)=i\right\}|.
  • out-degree of node LaTeX: i is LaTeX: \textstyle |\left\{k: T(k)=i\right\}|.
  • arc capacity limits the total flow across the arc at any one time.
  • node capacity limits the total flow through a node at any one time.
  • supply or demand at a node provides external input or an output requirement.
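
A small Python sketch of a network N = [V, A] stored as an arc list, with the tail, head, and degree functions above (the data here are hypothetical):

    arcs = [(1, 2), (1, 3), (2, 3), (3, 1)]            # k-th arc is (T(k), H(k))

    def tail(k): return arcs[k][0]                     # T(k)
    def head(k): return arcs[k][1]                     # H(k)
    def in_degree(i): return sum(1 for k in range(len(arcs)) if head(k) == i)
    def out_degree(i): return sum(1 for k in range(len(arcs)) if tail(k) == i)

    print(in_degree(3), out_degree(1))                 # 2 2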

Network flows

This is an assignment of arc values, called flows, say LaTeX: f(k) for the k-th arc, that satisfy two types of constraints: (1) arc bounds, LaTeX: \textstyle L \le f \le U, and (2) node balances, LaTeX: \textstyle \mbox{Flow out of node } i - \mbox{ Flow into node } i = b(i). The flow out of node LaTeX: i can be expressed as LaTeX: \textstyle \sum_k \left \{f(k): T(k)=i \right \}, and the flow into node LaTeX: i can be expressed as LaTeX: \textstyle \sum_k \left \{f(k): H(k)=i \right \}.

If LaTeX: b(i) > 0, b(i) is a supply at node LaTeX: i (called a supply node); if LaTeX: b(i) < 0, -b(i) is a demand at node LaTeX: i (called a demand node). If LaTeX: b(i)=0, node LaTeX: i is simply a transshipment node, and the balance equation says that the flow into node LaTeX: i must equal the flow out of node LaTeX: i. Another way to express the node flow balance equations is with the node-arc incidence matrix: LaTeX: Mf=b.

Still another representation is to define flow variables, LaTeX: x(i, j) on LaTeX: \textstyle V \times V, which describes how much flow goes from node LaTeX: i to node LaTeX: j. This can be used as long as there are no parallel arcs - i.e., no two arcs have both the same tail and the same head. (In some applications, parallel arcs are needed, such as to increase capacity across a pair of arcs with an increased cost.) In this form, the capacity constraints are still of the form LaTeX: \textstyle L \le x \le U, but the node equations have a different form:


LaTeX: \textstyle \sum_j \left \{x(i, j): (i, j) \in A \right \} - \sum_j \left \{x(j, i): (j, i) \in A \right \} = b(i) \mbox{ for all } i \in V.


The (linear) min cost network flow problem is to minimize total cost, LaTeX: \textstyle \sum_{ij} \left \{c(i,j)x(i,j): (i,j) \in A \right \}, where LaTeX: c is a unit cost of flow, subject to the flow bounds and balance equations.
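
A Python sketch of this linear program using scipy's linprog on a small hypothetical network: the node-arc incidence matrix M carries +1 at an arc's tail and -1 at its head, so Mf = b matches the balance equations above.

    import numpy as np
    from scipy.optimize import linprog

    arcs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
    c = [1.0, 4.0, 1.0, 6.0, 1.0]                      # unit costs of flow
    b = [2.0, 0.0, 0.0, -2.0]                          # supply 2 at node 0, demand 2 at node 3

    M = np.zeros((len(b), len(arcs)))
    for k, (i, j) in enumerate(arcs):
        M[i, k], M[j, k] = 1.0, -1.0                   # +1 at the tail, -1 at the head

    res = linprog(c, A_eq=M, b_eq=b, bounds=[(0.0, 3.0)] * len(arcs))
    print(res.x)                                       # optimal flow: 2 units along 0->1->2->3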


Neural network

(also called artificial neural network, abbr. ANN). A network where the nodes correspond to neurons and the arcs correspond to synaptic connections in the biological metaphor. The following properties are included:


  • neural state: each node has a state variable, say x. In the brain, this could be the potassium level; in computing applications, it could be anything the modeler chooses.
  • arc weight: each arc has a weight that affects the state of its neighboring nodes when firing. If the weight is positive, it is said to be excitatory; if it is negative, it is inhibitory.
  • state equations: the neural states change by some differential (or difference) equation, say LaTeX: \textstyle x' = F(x,w,t). Typically (but not necessarily), LaTeX: -F is the gradient of an energy function (in keeping with the biological metaphor), say LaTeX: \textstyle F(x,w,t) = -\nabla_x E(x,w,t), so that LaTeX: x(t) follows a path of steepest descent towards a minimum energy state (sketched in code after this list).
  • learning mechanism: this could be equations to change the weights: LaTeX: \textstyle w' = L(x,w,t). Various learning mechanisms are represented this way, including a form of supervised learning that uses a training set to provide feedback on errors. Other elements can be learned besides the arc weights, including the topology of the network.
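
A minimal Python sketch of the state equations as steepest descent on an energy function; the quadratic energy E(x, w) = -x'Wx/2 here is a hypothetical choice, giving F = -grad_x E = Wx.

    import numpy as np

    def state_step(x, W, dt=0.01):
        # Discrete-time version of x' = F(x, w) with F(x, w) = W @ x.
        return x + dt * (W @ x)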



Newsboy problem

A newspaper is concerned with controlling the number of papers to be distributed to newsstands. The cost of a paper varies (e.g., Sunday vs. daily), and the demand is a random variable, LaTeX: q, with probability function LaTeX: P(q) = probability that demand equals LaTeX: q. Unsold papers are returned, with no salvage value the next day, losing the production cost. It is possible, however, for a newsstand to order more papers the same day. There are holding and shortage (penalty) costs. The problem is to determine a reorder policy that minimizes total expected cost. This problem was used to consider a reorder policy with a 2-parameter decision rule:

  1. LaTeX: s = inventory level at which an order is placed;
  2. LaTeX: S = inventory level to which to order.

Then, the decision rule to be employed is the following policy:

Order nothing if the inventory of papers is LaTeX: \ge s;
Order LaTeX: S-x if the inventory on hand is LaTeX: x < s (bringing it up to LaTeX: S).

The significance of this problem is that it was used to introduce the notion (and optimality) of an LaTeX: (s, S) policy in inventory theory.
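
A Python sketch simulating such an (s, S) policy; the demand distribution and the holding/shortage costs are hypothetical stand-ins:

    import random

    def average_cost(s, S, days=10000, hold=0.1, short=1.0):
        inventory, total = S, 0.0
        for _ in range(days):
            if inventory < s:                          # order up to S
                inventory = S
            q = random.randint(0, 20)                  # random daily demand
            total += hold * max(inventory - q, 0) + short * max(q - inventory, 0)
            inventory = max(inventory - q, 0)
        return total / days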


Newton method

(also called Newton-Raphson method). This is the iterative scheme to find a zero of a function:


LaTeX: \textstyle x' = x - J(x)^{-1} F(x),


where LaTeX: \textstyle F:\mathbb{R}^n \to \mathbb{R}^n LaTeX: \textstyle (F \in C^1) and LaTeX: J(x) is the Jacobian of LaTeX: F (assumed nonsingular). In mathematical programming this is used to find a stationary point, where LaTeX: F=\nabla f and LaTeX: J=H_f, the Hessian of LaTeX: f. Lacking global convergence, this leads to the modified Newton method, sometimes called the damped Newton method. (See the associated myth, Myth NLP-13, to avoid misconception.)
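
A Python sketch of the iteration (names are illustrative; in practice one solves the linear system rather than inverting J):

    import numpy as np

    def newton(F, J, x, tol=1e-10, max_iter=50):
        # Newton-Raphson iteration x' = x - J(x)^{-1} F(x).
        for _ in range(max_iter):
            step = np.linalg.solve(J(x), F(x))         # solve J(x) s = F(x)
            x = x - step
            if np.linalg.norm(step) < tol:
                break
        return x

    # Example: a zero of F(x) = x^2 - 2, with 1x1 Jacobian [2x]:
    root = newton(lambda x: np.array([x[0]**2 - 2]),
                  lambda x: np.array([[2 * x[0]]]),
                  np.array([1.0]))
    print(root)                                        # approximately sqrt(2)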


No-free-lunch theorem

In heuristic search, this is the proposition that all methods perform the same when averaged over all possible objective functions. The idea is that a particular search algorithm, like simulated annealing, may be designed to perform especially well for some functions, compared to a genetic algorithm, but when applied to a representative sample of all possible objective functions, they will perform exactly the same, on average. This implies that to do especially well on some problems, the search method must do worse on others; hence, the "no-free-lunch" description.


No-good

A partial assignment of variables to values that does not lead to a solution. In other words, there is no solution to the overall problem that satisfies all the assignments in the no-good.

In a backtracking search to find a solution, each dead-end corresponds to a no-good. No-goods become especially useful when they can be learned and added to the constraint problem as implied constraints, which remove many dead-ends that would otherwise have to be searched.

In particular, no-good learning and reasoning are very important for modern techniques to solve SAT problems.


Node consistency

A simple consistency property concerning unary constraints: a variable is node consistent if all values within its domain are consistent with all unary constraints on the variable.
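
A Python sketch of enforcing node consistency by pruning each variable's domain against its unary constraints (the data layout is illustrative):

    def make_node_consistent(domains, unary):
        # domains: dict var -> set of values; unary: dict var -> list of predicates.
        return {v: {x for x in dom if all(c(x) for c in unary.get(v, []))}
                for v, dom in domains.items()}

    print(make_node_consistent({'x': {1, 2, 3, 4}}, {'x': [lambda v: v % 2 == 0]}))
    # {'x': {2, 4}}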


Node packing problem

See Packing problem.


Nonbasic

A variable or column that is not basic.


Nondegeneracy

A nondegenerate solution is one that is not degenerate. A nondegeneracy assumption is one that ensures every optimal solution is not degenerate. In general, a nondegenerate solution is strictly complementary. In linear programming, nondegeneracy is equivalent to uniqueness in both the primal and the dual.


Nonlinear program

At least one of the functions (objective or constraint) is not affine on LaTeX: X. Usually, it is the rule of the function that classifies it as nonlinear; in particular, a linear integer program is not generally considered to be a nonlinear program (NLP).


Norm

This is a function of a vector, say LaTeX: ||x||, that satisfies three properties:

  1. Homogeneous: LaTeX: \textstyle ||tx|| = |t| \, ||x|| for all scalars LaTeX: t.
  2. Positive: LaTeX: \textstyle ||x|| > 0 \mbox{ for } x \ne 0. (Note: LaTeX: ||0|| = 0 by homogeneity, so LaTeX: 0 is the unique vector with zero norm.)
  3. Subadditive: LaTeX: \textstyle ||x + y|| \le ||x|| + ||y||.

Norms that arise frequently in mathematical programming are:

Euclidean norm (on LaTeX: \mathbb{R}^n): LaTeX: \textstyle ||x|| = \sqrt{\sum_j x_j^2}

L_inf (on LaTeX: \mathbb{R}^n): LaTeX: \textstyle ||x|| = \max_j\{|x_j|\} (= \lim L_p \mbox{ as } p \to \infty)

L_p (on LaTeX: \mathbb{R}^n, for LaTeX: \textstyle p \ge 1): LaTeX: \textstyle ||x|| = \left [ \sum_j |x_j|^p \right ]^{1/p}

Matrix norm (induced by vector norm): LaTeX: \textstyle ||A|| = \max \left \{||Ax||: ||x||=1 \right \}

Supremum norm (on function space): LaTeX: \textstyle ||F|| = \sup \left \{|F(x)|: x \in X \right \}
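
The vector and matrix norms above can be computed with numpy for illustration:

    import numpy as np

    x = np.array([3.0, -4.0])
    print(np.linalg.norm(x, 2))        # Euclidean norm: 5.0
    print(np.linalg.norm(x, np.inf))   # L_inf norm: 4.0
    print(np.linalg.norm(x, 3))        # L_p norm with p = 3
    A = np.array([[1.0, 2.0], [0.0, 1.0]])
    print(np.linalg.norm(A, 2))        # matrix norm induced by the Euclidean norm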


Normal cone

For a convex set LaTeX: S, the normal cone to LaTeX: S at LaTeX: x, for LaTeX: \textstyle x \in S, is LaTeX: \textstyle \left \{y : \langle v - x, y \rangle \le 0 \mbox{ for all } v \in S \right \}, where LaTeX: \textstyle \langle \cdot, \cdot \rangle is an inner product.
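
For example (not in the original entry), for the nonnegative orthant LaTeX: \textstyle S = \mathbb{R}^n_+ and LaTeX: \textstyle x \in S, this works out to LaTeX: \textstyle \left \{y : y \le 0, \; y_j x_j = 0 \mbox{ for all } j \right \} - the nonpositive vectors orthogonal to LaTeX: x.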


Northwest corner rule

This constructs a feasible shipment for the transportation problem, from the cost table (LaTeX: c_{ij} = unit cost from source LaTeX: i to destination LaTeX: j), as follows. Start at the NW corner (source 1, destination 1), and allocate the most possible: the min of the supply and demand. If supply exceeds the demand, proceed to the next destination, and continue until all of supply 1 is allocated. Then, go to source 2 and repeat the allocation process, starting with the first (lowest index) destination whose demand has not been fulfilled. If demand exceeds the supply, proceed to the next source, and continue until all of demand 1 is allocated. Then, go to destination 2 and repeat the allocation process. Eventually, all demand must be fulfilled if the problem is feasible (i.e., total supply LaTeX: \ge total demand).
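
A Python sketch of the rule (the supplies and demands here are hypothetical, with total supply equal to total demand):

    def northwest_corner(supply, demand):
        supply, demand = supply[:], demand[:]
        ship = [[0] * len(demand) for _ in supply]
        i = j = 0
        while i < len(supply) and j < len(demand):
            q = min(supply[i], demand[j])              # allocate the most possible
            ship[i][j] = q
            supply[i] -= q
            demand[j] -= q
            if supply[i] == 0: i += 1                  # supply exhausted: next source
            else: j += 1                               # demand fulfilled: next destination
        return ship

    print(northwest_corner([20, 30], [10, 25, 15]))
    # [[10, 10, 0], [0, 15, 15]]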


Null space

The null space of a matrix LaTeX: A is LaTeX: \textstyle \left \{x: Ax=0 \right \}. It is the orthogonal complement of the row space of LaTeX: A, LaTeX: \textstyle \left \{x: x=yA \mbox{ for some } y \in \mathbb{R}^m \right \}. Its dimension is LaTeX: n - \mbox{rank}(A), which complements the dimension of the row space.
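
For illustration, scipy can compute an orthonormal basis of the null space:

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 2.0, 3.0]])
    Z = null_space(A)                 # 3 x 2 basis, since n - rank(A) = 3 - 1 = 2
    print(np.allclose(A @ Z, 0))      # True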


Numeric CSP

A constraint satisfaction problem is numeric if the constraints are numeric relations (e.g., log, sin, power, etc.) and the decision variables' domains are continuous.

