From Glossary

Haar condition

See randomized program.

Hadamard inequality

For an n \times n matrix A,

|\mbox{det}(A)| \le \|A_{:,1}\| \cdots \|A_{:,n}\| = \prod_{j=1}^n \|A_{:,j}\|,

where A_{:,j} is the j-th column of A and the norms are L_2.
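As a quick numeric illustration, the inequality can be spot-checked in Python (a sketch; the naive cofactor determinant is included only to keep the example self-contained):

```python
# Spot-check: |det(A)| <= product of the Euclidean column norms.
import math
import random

def det(a):
    """Determinant by cofactor expansion (adequate for a small example)."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

random.seed(0)
n = 4
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
# Hadamard bound: product over columns j of the L_2 norm of column j.
bound = math.prod(math.sqrt(sum(A[i][j] ** 2 for i in range(n))) for j in range(n))
assert abs(det(A)) <= bound
```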


Half-line

A half-line is a collection of points of the form \{t {\bf v}: t \ge 0\}, where {\bf v} is in {\mathbb R}^n with {\bf v} \neq 0 (some authors require \|{\bf v}\| = 1, without loss of generality). Note that the half-line is rooted at the origin; see ray for the translated version.


Half space

Consider the hyperplane \{x \in \mathbb{R}^n: a^T x = b\}, where a \in \mathbb{R}^n \setminus \{0\}. This partitions \mathbb{R}^n into two (closed) half spaces: \{x: a^T x \le b\} and \{x: a^T x \ge b\}. The two associated open half spaces are \{x: a^T x < b\} and \{x: a^T x > b\}.
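For illustration, a small Python sketch that classifies points relative to a hypothetical hyperplane with normal a = (1, -2) and right-hand side b = 3:

```python
# Which half space contains a point, for the hyperplane a^T x = b
# with (illustrative) normal a = (1, -2) and right-hand side b = 3.
a = [1.0, -2.0]
b = 3.0

def side(x):
    v = sum(ai * xi for ai, xi in zip(a, x))   # a^T x
    if v < b:
        return "open lower half space: a^T x < b"
    if v > b:
        return "open upper half space: a^T x > b"
    return "on the hyperplane: a^T x = b"

print(side([1.0, -1.0]))   # a^T x = 1 + 2 = 3 -> on the hyperplane
```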

Hard constraint

A constraint is hard if it has to be satisfied in any feasible solution. Compare with soft constraint.

Hausdorff metric

This metric arises in normed spaces, giving a measure of distance between two (point) sets, S and T. Given a norm, let d be the distance function between a point and a set:

d(x, T) = \inf \{\|x - y\|: y \in T\} and d(S, y) = \inf \{\|x - y\|: x \in S\}.

Then define

D(S, T) = \sup \{d(x, T): x \in S\} and D(T, S) = \sup \{d(S, y): y \in T\}.

Then h(S, T) = \max \{D(S, T), D(T, S)\} is the Hausdorff metric. In words: for each x \in S, let y(x) be its closest point in T, and maximize this distance over all x \in S; do the same with the roles of S and T reversed; h is the larger of the two values.

For example, let S be the interval [-1, 1] and let T be the interval [0, 3]. The Hausdorff distance between these two intervals is

h(S, T) = \max \{D(S, T), D(T, S)\} = \max \{1, 2\} = 2,

since D(S, T) = 1 is attained at x = -1 and D(T, S) = 2 is attained at y = 3.
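This example can be checked numerically by sampling both intervals on a fine grid (a Python sketch; the grid approximation recovers the exact value here because the worst-case points land on grid points):

```python
# Hausdorff distance between finite samples of S = [-1, 1] and T = [0, 3].
def hausdorff(S, T):
    d = lambda x, U: min(abs(x - u) for u in U)   # distance from point to set
    D_ST = max(d(x, T) for x in S)                # sup over x in S of d(x, T)
    D_TS = max(d(y, S) for y in T)                # sup over y in T of d(S, y)
    return max(D_ST, D_TS)

S = [k / 100 - 1 for k in range(201)]   # grid on [-1, 1]
T = [k / 100 for k in range(301)]       # grid on [0, 3]
print(hausdorff(S, T))                  # -> 2.0 (worst points: x = -1 and y = 3)
```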

Hessenberg matrix

An upper Hessenberg matrix is a square matrix with A_{i,j} = 0 for i > j + 1. A lower Hessenberg matrix is one whose transpose is upper Hessenberg.
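A small Python predicate for the defining condition (using 0-based indices, for which the condition takes the same form):

```python
# Predicate for the defining condition, 0-based: A[i][j] == 0 when i > j + 1.
def is_upper_hessenberg(A):
    n = len(A)
    return all(A[i][j] == 0 for i in range(n) for j in range(n) if i > j + 1)

H = [[1, 2, 3],
     [4, 5, 6],
     [0, 7, 8]]      # for n = 3 only the bottom-left entry must vanish -- it does
print(is_upper_hessenberg(H))   # True
```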


Hessian matrix

The matrix of second partial derivatives of a function (assumed to be twice differentiable). It is often denoted H_f(x), where f is the function and x is a point in its domain.
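For a twice-differentiable function available only as a black box, the Hessian can be approximated by central finite differences. A Python sketch (the step size h = 1e-4 is an arbitrary choice, and the example function is illustrative):

```python
# Central finite-difference approximation of the Hessian H_f(x).
def hessian(f, x, h=1e-4):
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            def shifted(si, sj):
                # x with coordinate i shifted by si*h and coordinate j by sj*h
                return [x[k] + h * (si * (k == i) + sj * (k == j)) for k in range(n)]
            H[i][j] = (f(shifted(+1, +1)) - f(shifted(+1, -1))
                       - f(shifted(-1, +1)) + f(shifted(-1, -1))) / (4 * h * h)
    return H

f = lambda v: v[0] ** 2 * v[1]       # f(x, y) = x^2 y
H = hessian(f, [1.0, 2.0])           # exact Hessian at (1, 2): [[4, 2], [2, 0]]
```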


Heuristic

In mathematical programming, this usually means a procedure that seeks an optimal solution but does not guarantee it will find one, even if one exists. It is often used in contrast to an algorithm, so branch and bound would not be considered a heuristic in this sense. In AI, however, a heuristic is an algorithm (with some guarantees) that uses a heuristic function to estimate the "cost" of branching from a given node to a leaf of the search tree. (Also, in AI, the usual rules of node selection in branch and bound can be determined by the choice of heuristic function: best-first, breadth-first, or depth-first search.)

Heuristic function

A heuristic function is used to guide a heuristic search, much like a human would do; in this sense, it is a rule of thumb. Formally, in AI, a heuristic function is defined on the states of a search tree and is used in the evaluation of nodes for expansion in what is sometimes called a best-first directed tree search strategy.

Heuristic search

In mathematical programming, this is any (purposeful) search procedure to seek a solution to a global optimization problem, notably to combinatorial programs. In AI, this is a (partially) informed search (vs. blind search), using a heuristic function for guidance.

Two types of (blind) search methods are:

  1. Breadth-first: expanding a node of least depth;
  2. Depth-first: expanding a node of greatest depth (requiring backtracking).
(See branch and bound.)

A specific example of a local heuristic search algorithm is the greedy algorithm. Another is 1-Opt for the TSP.

Here are heuristic search strategies that are based on some biological metaphor:

  1. Ant colony optimization, based on how ants solve problems;
  2. Genetic algorithm, based on genetics and evolution;
  3. Neural networks, based on how the brain functions;
  4. Simulated annealing, based on thermodynamics;
  5. Tabu search, based on memory-response;
  6. Target analysis, based on learning.

Hirsch conjecture

If the basis of one basic feasible solution differs from that of another by k columns, there exist k feasible pivots to go from the first to the second. It was posed by Warren M. Hirsch in 1957 and was eventually shown to be false for general polyhedra; the bounded case remained open for decades until Santos gave a counterexample in 2010.

Hölder inequality

Hölder's inequality states

\|xy\|_1 \le \|x\|_p \|y\|_q,

where \|\cdot\|_k denotes the L_k norm (and xy the componentwise product), p > 1, and 1/p + 1/q = 1. Equality holds if and only if |x|^p = a|y|^q for some scalar a. (Note: this is the Cauchy-Schwarz inequality when p = q = 2.)
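A numeric spot-check in Python, using the arbitrarily chosen exponents p = 3 and q = 3/2:

```python
# Spot-check of Hölder's inequality with p = 3, q = 3/2 (so 1/p + 1/q = 1).
import random

random.seed(1)
p, q = 3.0, 1.5
x = [random.uniform(-1, 1) for _ in range(10)]
y = [random.uniform(-1, 1) for _ in range(10)]
lhs = sum(abs(xi * yi) for xi, yi in zip(x, y))               # ||xy||_1
rhs = (sum(abs(xi) ** p for xi in x) ** (1 / p) *
       sum(abs(yi) ** q for yi in y) ** (1 / q))              # ||x||_p ||y||_q
assert lhs <= rhs + 1e-12
```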

Homogeneous function

A homogeneous function of degree p satisfies

f(ax) = a^p f(x) for all scalars a.

It is positively homogeneous if we restrict a > 0. When the degree is not specified (even by context), it is generally assumed to be 1. For example, x is homogeneous (of degree 1), xy + x^2 is homogeneous of degree 2, x + x^2 is not homogeneous, and xy/(x+y) is positively homogeneous (of degree 1) on the positive elements of {\mathbb R}^2.
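The degree-2 example above can be checked numerically (a Python sketch with arbitrary test values):

```python
# Numeric check of the examples: xy + x^2 scales with degree 2,
# while x + x^2 is not homogeneous.
f = lambda x, y: x * y + x ** 2   # homogeneous of degree 2
g = lambda x, y: x + x ** 2       # not homogeneous
a, x, y = 3.0, 1.5, -0.5
assert abs(f(a * x, a * y) - a ** 2 * f(x, y)) < 1e-9   # f(ax) = a^2 f(x)
assert abs(g(a * x, a * y) - a ** 2 * g(x, y)) > 1e-9   # degree-2 scaling fails
```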


Homotopy

A continuous transformation from one mapping to another. One example is the convex combination, t f(x) + (1-t) F(x). As t varies from 0 to 1, the function changes continuously from F to f.

Hungarian method

An algorithm to solve the standard assignment problem (presented by H. Kuhn in 1955). Let C be the n \times n cost matrix.

  • Step 0 (initialize). Subtract the least element of each row from that row of C. Then, do likewise for each column. The resulting matrix, C^0, has a zero in every row and column. (Further, a least-cost assignment for C^0 is also a least-cost assignment for C.) Re-define C = C^0.
  • Step 1 (cover zeros). Draw the minimum number of lines through the rows and columns needed to cover all zeros in C. If that minimum is n, you can assign i to j such that C_{ij} = 0; then remove row i and column j, repeating the process to obtain an optimal assignment. Otherwise, the minimum is less than n; continue.
  • Step 2 (revise C). Select the minimum uncovered element. Subtract it from each uncovered element and add it to each twice-covered element (i.e., to those with both a horizontal and a vertical line through them). Return to step 1.


For example, consider the cost matrix:

               _ 1  2  3  4  5 _ 
              |                 | 
              |  1  2  3  4  7  | 1
              |  7  4  2  6  5  | 2
          C = |  2  4  3  1  4  | 3
              |  2  3  1  2  3  | 4
              |  5  4  3  2  4  | 5
              |_               _| 

After subtracting the minimum element of each row:

               _ 1  2  3  4  5 _ 
              |                 | 
              |  0  1  2  3  6  | 1
              |  5  2  0  4  3  | 2
              |  1  3  2  0  3  | 3
              |  1  2  0  1  2  | 4
              |  3  2  1  0  2  | 5
              |_               _| 

After subtracting the minimum element of each column:

               _ 1  2  3  4  5 _ 
              |        |  |  |  | 
              |--0--0--2--3--4--| 1
              |  5  1  0  4  1  | 2
              |  1  2  2  0  1  | 3
              |  1  1  0  1  0  | 4
              |  3  1  1  0  0  | 5
              |_       |  |  | _| 

The minimum number of lines needed to cover all zeros is 4: a horizontal line through row 1 and vertical lines through columns 3, 4, and 5. The minimum uncovered element is 1. Subtract it from each uncovered element and add it to each twice-covered element (cells (1,3), (1,4), and (1,5)):

               _ 1  2  3  4  5 _ 
              |                 | 
              |  0* 0  3  4  5  | 1
              |  4  0* 0  4  1  | 2
              |  0  1  2  0* 1  | 3
              |  0  0  0* 1  0  | 4
              |  2  0  1  0  0* | 5
              |_               _| 

The minimum cover is now 5, by drawing a line through each column. A minimum assignment is indicated by *. For example, we assign person 1 to job 1, person 2 to job 2, person 3 to job 4, person 4 to job 3, and person 5 to job 5. (There are alternative optimal assignments.)
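Because n = 5 is tiny, the worked example can be verified by brute force over all 5! assignments (a Python sketch for checking only, not a substitute for the Hungarian method's polynomial running time):

```python
# Enumerate all 5! = 120 assignments of the example cost matrix and
# confirm the minimum total cost.
from itertools import permutations

C = [[1, 2, 3, 4, 7],
     [7, 4, 2, 6, 5],
     [2, 4, 3, 1, 4],
     [2, 3, 1, 2, 3],
     [5, 4, 3, 2, 4]]

best_cost, best_perm = min(
    (sum(C[i][perm[i]] for i in range(5)), perm)
    for perm in permutations(range(5))
)
# best_cost == 11, achieved by 1->1, 2->2, 3->4, 4->3, 5->5
# (0-based: best_perm == (0, 1, 3, 2, 4)), matching the starred assignment.
```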


Hypergraph

A set of nodes (or vertices), say V, plus a set of edges (or hyperedges), say E, such that each member of E is a subset of V. When each member of E has exactly two nodes, [V, E] is a graph. The hypergraph is a convenient mathematical way to describe relations that involve more than two objects (nodes).

Specific cases:

  • An IIS hypergraph: each node represents an inequality and each edge represents an IIS.
  • A common graphical representation of non-binary constraint networks is as a hypergraph, where nodes represent variables and each edge groups together the variables that belong to the scope of the same constraint.


Hyperplane

An affine set of dimension n - 1, i.e., a set of the form \{x: a^T x = b\}, where a is a nonzero vector called the normal of the hyperplane.


Hypograph

Abbreviated \mbox{hypo}\{f, X\}. The hypograph of f over X is \{(x, z): x \in X, \; z \le f(x)\}, which is the region on or below the graph of f.
