Classes | Operators

evaluate_class_gmm (Operator)


evaluate_class_gmm — Evaluate a feature vector by a Gaussian Mixture Model.


evaluate_class_gmm( : : GMMHandle, Features : ClassProb, Density, KSigmaProb)

Herror T_evaluate_class_gmm(const Htuple GMMHandle, const Htuple Features, Htuple* ClassProb, Htuple* Density, Htuple* KSigmaProb)

Herror evaluate_class_gmm(const HTuple& GMMHandle, const HTuple& Features, HTuple* ClassProb, HTuple* Density, HTuple* KSigmaProb)

HTuple HClassGmm::EvaluateClassGmm(const HTuple& Features, HTuple* Density, HTuple* KSigmaProb) const

void EvaluateClassGmm(const HTuple& GMMHandle, const HTuple& Features, HTuple* ClassProb, HTuple* Density, HTuple* KSigmaProb)

HTuple HClassGmm::EvaluateClassGmm(const HTuple& Features, double* Density, double* KSigmaProb) const

void HOperatorSetX.EvaluateClassGmm(
[in] VARIANT GMMHandle, [in] VARIANT Features, [out] VARIANT* ClassProb, [out] VARIANT* Density, [out] VARIANT* KSigmaProb)

VARIANT HClassGmmX.EvaluateClassGmm(
[in] VARIANT Features, [out] double* Density, [out] double* KSigmaProb)

static void HOperatorSet.EvaluateClassGmm(HTuple GMMHandle, HTuple features, out HTuple classProb, out HTuple density, out HTuple KSigmaProb)

HTuple HClassGmm.EvaluateClassGmm(HTuple features, out double density, out double KSigmaProb)


evaluate_class_gmm computes three different probability values for a feature vector Features with the Gaussian Mixture Model (GMM) GMMHandle.

The a-posteriori probability of class i for the sample Features (denoted x below) is computed as

p(i|x) = P(i) p(x|i) / ( sum_j P(j) p(x|j) )

and returned for each class in ClassProb. The formulas for the calculation of the center density function p(x|j) are described with create_class_gmm.
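The posterior formula above can be sketched in plain Python. This is illustrative only, not the HALCON API; the spherical Gaussians here are a simplifying stand-in for the center density p(x|j) described with create_class_gmm:

```python
import math

# Illustrative sketch (not HALCON code): posterior p(i|x) for a toy GMM
# with one spherical Gaussian per class, mirroring the formula above.
def gaussian_density(x, mean, sigma):
    """Density of an isotropic Gaussian at x (x and mean are equal-length lists)."""
    d = len(x)
    sq = sum((xi - mi) ** 2 for xi, mi in zip(x, mean))
    norm = (2 * math.pi * sigma ** 2) ** (d / 2)
    return math.exp(-sq / (2 * sigma ** 2)) / norm

def class_posteriors(x, priors, means, sigmas):
    """p(i|x) = P(i) p(x|i) / sum_j P(j) p(x|j)."""
    joint = [p * gaussian_density(x, m, s)
             for p, m, s in zip(priors, means, sigmas)]
    total = sum(joint)
    return [j / total for j in joint]

probs = class_posteriors([0.1, 0.0],
                         priors=[0.5, 0.5],
                         means=[[0.0, 0.0], [3.0, 3.0]],
                         sigmas=[1.0, 1.0])
# probs sums to 1; the class centered at the origin dominates for this sample
```

The normalization by the sum over all classes is what makes ClassProb a proper posterior distribution over the classes.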

The probability density of the feature vector is computed as the sum of the class-conditional densities weighted by the prior class probabilities

p(x) = sum_i Pr(i) p(x|i)

and is returned in Density. Here, Pr(i) are the prior probabilities of the classes as computed by train_class_gmm. Density can be used for novelty detection, i.e., to reject feature vectors that do not belong to any of the trained classes. However, since Density depends on the scaling of the feature vectors, and since Density is a probability density and consequently does not need to lie between 0 and 1, novelty detection can typically be performed more easily with KSigmaProb (see below).
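A minimal one-dimensional sketch of this density computation (hypothetical class parameters, not HALCON code) shows why Density drops off for feature vectors far from all trained classes:

```python
import math

# Illustrative sketch of the density formula above: p(x) is the
# prior-weighted sum of the class-conditional densities p(x|i).
def gaussian_1d(x, mean, sigma):
    return math.exp(-(x - mean) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def mixture_density(x, priors, means, sigmas):
    return sum(p * gaussian_1d(x, m, s)
               for p, m, s in zip(priors, means, sigmas))

near = mixture_density(0.0, [0.7, 0.3], [0.0, 5.0], [1.0, 1.0])
far = mixture_density(50.0, [0.7, 0.3], [0.0, 5.0], [1.0, 1.0])
# the density at a point far from all trained classes is vanishingly small,
# which is the basis for using Density in novelty detection
```

Note that the value of `near` here depends on the scale of the features, which is exactly why the document recommends KSigmaProb for novelty detection instead.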

A k-sigma error ellipsoid is defined as the locus of points x for which

( x - m )^T  C^(-1) ( x - m )  = k^2

In the one-dimensional case this is the interval [m - k*sigma, m + k*sigma]. For any 1D Gaussian distribution, approximately 68% of the occurrences of the random variable lie within this range for k=1, approximately 95% for k=2, and approximately 99% for k=3. Hence, the probability that a Gaussian distribution generates a random variable outside this range is approximately 32%, 5%, and 1%, respectively. This probability is called the k-sigma probability and is denoted by P[k]. P[k] can be computed numerically for univariate as well as multivariate Gaussian distributions, where it should be noted that for the same value of k, P^(N)[k] > P^(N+1)[k] (here N and N+1 denote dimensions). For Gaussian mixture models the k-sigma probability is computed as:
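For the one-dimensional case, this P[k] (the probability of falling outside [m - k*sigma, m + k*sigma]) has a closed form via the error function; a quick stdlib check of the percentages quoted above:

```python
import math

# 1D k-sigma probability as defined in the text: the chance that a
# Gaussian sample falls OUTSIDE the interval [m - k*sigma, m + k*sigma].
def k_sigma_prob_1d(k):
    return 1.0 - math.erf(k / math.sqrt(2.0))

for k in (1, 2, 3):
    print(k, round(k_sigma_prob_1d(k), 4))
# k=1 -> ~0.3173, k=2 -> ~0.0455, k=3 -> ~0.0027
```

For multivariate Gaussians the corresponding quantity involves the chi-square distribution of k^2 and is computed numerically, as the text notes.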

P_gmm[x] = sum_j P(j) P_j[k_j],

where  k_j^2 = ( x - m_j )^T  C_j^(-1) ( x - m_j ).

These per-class values are weighted with the class priors and then normalized. The maximum value over all classes is returned in KSigmaProb, such that

KSigmaProb = ( 1 / max_j Pr(j) ) * max_i ( Pr(i) * P_gmm^(i)[x] )

KSigmaProb can be used for novelty detection. Typically, feature vectors having values below 0.0001 should be rejected. Note that the rejection threshold defined by the parameter RejectionThreshold in classify_image_class_gmm refers to the KSigmaProb values.
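As an illustrative one-dimensional sketch (one Gaussian center per class, hypothetical numbers, and the normalization by the largest prior is an assumption based on the formula above), KSigmaProb and the 0.0001 rejection threshold behave as follows:

```python
import math

# Hypothetical 1D sketch of the KSigmaProb computation: each class has a
# single Gaussian center, so P_gmm^(i)[x] reduces to P_i[k_i] with
# k_i = |x - m_i| / sigma_i, and P[k] = 1 - erf(k / sqrt(2)) is the
# probability of a sample falling outside the k-sigma interval.
def k_sigma_prob(x, priors, means, sigmas):
    outside = [1.0 - math.erf(abs(x - m) / (s * math.sqrt(2.0)))
               for m, s in zip(means, sigmas)]
    weighted = [p * o for p, o in zip(priors, outside)]
    return max(weighted) / max(priors)

inlier = k_sigma_prob(0.2, [0.6, 0.4], [0.0, 5.0], [1.0, 1.0])
outlier = k_sigma_prob(30.0, [0.6, 0.4], [0.0, 5.0], [1.0, 1.0])
# the inlier scores well above a rejection threshold such as 0.0001,
# while the far-away outlier falls below it and would be rejected
```

Unlike Density, this value always lies between 0 and 1 and does not depend on the scaling of the feature vectors, which is why it is the recommended quantity for novelty detection.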

Before calling evaluate_class_gmm, the GMM must be trained with train_class_gmm.

The position of the maximum value of ClassProb is usually interpreted as the class of the feature vector and the corresponding value as the probability of the class. In this case, classify_class_gmm should be used instead of evaluate_class_gmm, because classify_class_gmm directly returns the class and corresponding probability.
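For example, deriving the class decision from ClassProb by hand (illustrative values):

```python
# Illustrative: the index of the maximum posterior in ClassProb is taken
# as the class, and its value as the class probability -- which is what
# classify_class_gmm returns directly.
class_prob = [0.05, 0.85, 0.10]  # hypothetical ClassProb output
best_class = max(range(len(class_prob)), key=class_prob.__getitem__)
confidence = class_prob[best_class]
```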



GMMHandle (input_control)  class_gmm — HClassGmm, HTuple (integer)

GMM handle.

Features (input_control)  real-array — HTuple (real / double)

Feature vector.

ClassProb (output_control)  real-array — HTuple (real / double)

A-posteriori probability of the classes.

Density (output_control)  real — HTuple (real / double)

Probability density of the feature vector.

KSigmaProb (output_control)  real — HTuple (real / double)

Normalized k-sigma probability for the feature vector.


If the parameters are valid, the operator evaluate_class_gmm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

Possible Predecessors

train_class_gmm, read_class_gmm



See also



References

Christopher M. Bishop: "Neural Networks for Pattern Recognition"; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: "Unsupervised Learning of Finite Mixture Models"; IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 3; March 2002.


