train_class_gmm (Operator)

Name

train_class_gmm — Train a Gaussian Mixture Model.

Signature

train_class_gmm( : : GMMHandle, MaxIter, Threshold, ClassPriors, Regularize : Centers, Iter)

Herror T_train_class_gmm(const Htuple GMMHandle, const Htuple MaxIter, const Htuple Threshold, const Htuple ClassPriors, const Htuple Regularize, Htuple* Centers, Htuple* Iter)

void TrainClassGmm(const HTuple& GMMHandle, const HTuple& MaxIter, const HTuple& Threshold, const HTuple& ClassPriors, const HTuple& Regularize, HTuple* Centers, HTuple* Iter)

HTuple HClassGmm::TrainClassGmm(Hlong MaxIter, double Threshold, const HString& ClassPriors, double Regularize, HTuple* Iter) const

HTuple HClassGmm::TrainClassGmm(Hlong MaxIter, double Threshold, const char* ClassPriors, double Regularize, HTuple* Iter) const

HTuple HClassGmm::TrainClassGmm(Hlong MaxIter, double Threshold, const wchar_t* ClassPriors, double Regularize, HTuple* Iter) const   (Windows only)

static void HOperatorSet.TrainClassGmm(HTuple GMMHandle, HTuple maxIter, HTuple threshold, HTuple classPriors, HTuple regularize, out HTuple centers, out HTuple iter)

HTuple HClassGmm.TrainClassGmm(int maxIter, double threshold, string classPriors, double regularize, out HTuple iter)

def train_class_gmm(gmmhandle: HHandle, max_iter: int, threshold: float, class_priors: str, regularize: float) -> Tuple[Sequence[int], Sequence[int]]

Description

train_class_gmm trains the Gaussian Mixture Model (GMM) referenced by GMMHandle. Before the GMM can be trained, all training samples to be used for the training must be stored in the GMM using add_sample_class_gmm, add_samples_image_class_gmm, or read_samples_class_gmm. After the training, new training samples can be added to the GMM and the GMM can be trained again.
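
A minimal sketch of this workflow, assuming two classes of two-dimensional feature vectors; Features0, Features1, NumSamples0, and NumSamples1 are hypothetical names for feature data that has already been extracted:

* Two classes of 2D features (illustrative setup).
create_class_gmm (2, 2, [1,3], 'full', 'none', 0, 42, GMMHandle)
* Store all training samples in the GMM before training.
for I := 0 to NumSamples0 - 1 by 1
    add_sample_class_gmm (GMMHandle, Features0[2 * I:2 * I + 1], 0, 0.0)
endfor
for I := 0 to NumSamples1 - 1 by 1
    add_sample_class_gmm (GMMHandle, Features1[2 * I:2 * I + 1], 1, 0.0)
endfor
* Train; afterwards, further samples may be added and the GMM retrained.
train_class_gmm (GMMHandle, 100, 0.001, 'training', 0.0001, Centers, Iter)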

During the training, the error of the GMM on the stored training vectors is minimized with the expectation maximization (EM) algorithm.

MaxIter specifies the maximum number of iterations per class for the EM algorithm. In practice, values between 20 and 200 are sufficient for most problems. Threshold specifies a threshold for the relative change of the error. If the relative change of the error still exceeds the threshold after MaxIter iterations, the algorithm is canceled for this class. Because the algorithm starts with the maximum specified number of centers (parameter NumCenters in create_class_gmm), the number of centers and the error for this class will not be optimal in case of such a premature termination. In this case, a new training with different parameters (e.g., another value for RandSeed in create_class_gmm) can be tried.
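
One possible way to detect and react to such a premature termination (a sketch; retraining with a larger MaxIter is merely one option besides creating a new GMM with another RandSeed):

train_class_gmm (GMMHandle, 100, 0.001, 'training', 0.0001, Centers, Iter)
* A class whose entry in Iter equals MaxIter (here 100) was canceled
* before the relative error change fell below Threshold.
if (max(Iter) >= 100)
    train_class_gmm (GMMHandle, 200, 0.001, 'training', 0.0001, Centers, Iter)
endif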

ClassPriors specifies how the class priors of the GMM are determined. If 'training' is specified, the priors of the classes are taken from the proportions of the corresponding sample data during training. If 'uniform' is specified, the priors are set to 1/NumClasses for all classes.
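
For illustration (a sketch; the remaining parameter values are simply the defaults listed below):

* 'training': priors taken from the proportions of the stored samples.
* 'uniform':  priors fixed to 1/NumClasses, e.g. when the training set is
*             imbalanced but all classes are equally likely in operation.
train_class_gmm (GMMHandle, 100, 0.001, 'uniform', 0.0001, Centers, Iter)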

Regularize is used to regularize (nearly) singular covariance matrices during the training. A covariance matrix can collapse to singularity if it is trained with linearly dependent data. To avoid this, the small value specified by Regularize is added to each main diagonal element of the covariance matrix, which prevents this element from becoming smaller than Regularize. A recommended value for Regularize is 0.0001. If Regularize is set to 0.0, no regularization is performed.
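
As a small illustration of the mechanism (a sketch; the call simply uses the recommended value together with the default parameterization):

* Each main diagonal element C[i,i] of a class covariance matrix is
* effectively replaced by C[i,i] + Regularize, so it cannot drop below
* Regularize (here 0.0001); passing 0.0 disables this safeguard.
train_class_gmm (GMMHandle, 100, 0.001, 'training', 0.0001, Centers, Iter)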

The centers are initially distributed randomly. In individual cases, the algorithm may yield a relatively high error because the initial random values, determined by RandSeed in create_class_gmm, lead to a local minimum. In this case, a new GMM with a different value for RandSeed should be generated to test whether a significantly smaller error can be obtained.
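
A sketch of such a retry; write_samples_class_gmm and read_samples_class_gmm are used here to carry the stored samples over to a second GMM, and the file name as well as NumDim, NumClasses, and NumCenters are placeholders:

* Save the training samples stored in the current GMM.
write_samples_class_gmm (GMMHandle, 'samples_backup.gsf')
* Create a GMM that differs only in RandSeed and train it on the same data.
create_class_gmm (NumDim, NumClasses, NumCenters, 'full', 'none', 0, 43, \
                  GMMHandle2)
read_samples_class_gmm (GMMHandle2, 'samples_backup.gsf')
train_class_gmm (GMMHandle2, 100, 0.001, 'training', 0.0001, Centers2, Iter2)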

It should be noted that, depending on the number of centers, the type of covariance matrix, and the number of training samples, the training can take from a few seconds to several hours.

On output, train_class_gmm returns in Centers the number of centers per class that the EM algorithm has found to be optimal. These values can be used as a reference in NumCenters (in create_class_gmm) for future GMMs. If the number of centers found by training a new GMM on integer training data is unexpectedly high, this can often be corrected by adding noise to the training data via the Randomize parameter of add_sample_class_gmm. Iter contains the number of iterations performed per class. If a value in Iter equals MaxIter, the training algorithm was terminated prematurely for that class (see above).
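
A sketch of how these outputs might be used later on (NumDim, NumClasses, GMMHandleNext, IntFeatures, and ClassID are illustrative names):

* Use the largest number of centers found here as an upper bound for
* NumCenters of a future GMM trained on similar data.
create_class_gmm (NumDim, NumClasses, [1, max(Centers)], 'full', 'none', 0, \
                  42, GMMHandleNext)
* For integer-valued features, noise added via Randomize (e.g. 2.0) when
* storing the samples can reduce an unexpectedly high number of centers.
add_sample_class_gmm (GMMHandleNext, IntFeatures, ClassID, 2.0)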

Execution Information

This operator modifies the state of the following input parameter:

GMMHandle

During execution of this operator, access to the value of this parameter must be synchronized if it is used across multiple threads.

Parameters

GMMHandle (input_control, state is modified)  class_gmm (handle)

GMM handle.

MaxIter (input_control)  integer (integer)

Maximum number of iterations of the expectation maximization algorithm.

Default value: 100

Suggested values: 10, 20, 30, 50, 100, 200

Threshold (input_control)  real (real)

Threshold for relative change of the error for the expectation maximization algorithm to terminate.

Default value: 0.001

Suggested values: 0.001, 0.0001

Restriction: Threshold >= 0.0 && Threshold <= 1.0

ClassPriors (input_control)  string (string)

Mode to determine the a priori probabilities of the classes.

Default value: 'training'

List of values: 'training', 'uniform'

Regularize (input_control)  real (real)

Regularization value for preventing covariance matrix singularity.

Default value: 0.0001

Restriction: Regularize >= 0.0 && Regularize < 1.0

Centers (output_control)  integer-array (integer)

Number of centers found per class.

Iter (output_control)  integer-array (integer)

Number of iterations executed per class.

Example (HDevelop)

* Create the GMM classifier (NumDim and NumClasses are example values;
* adapt them to the feature dimension and the number of classes)
NumDim := 3
NumClasses := 2
create_class_gmm (NumDim, NumClasses, [1,5], 'full', 'none', 0, 42,\
                  GMMHandle)
* Add the training data
read_samples_class_gmm (GMMHandle, 'samples.gsf')
* Train the GMM
train_class_gmm (GMMHandle, 100, 1e-4, 'training', 1e-4, Centers, Iter)
* Write the Gaussian Mixture Model to file
write_class_gmm (GMMHandle, 'gmmclassifier.gmm')
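
The trained and saved classifier could subsequently be used along these lines (a sketch; FeatureVector stands for a feature tuple of dimension NumDim):

* Classify a single feature vector with the trained GMM
classify_class_gmm (GMMHandle, FeatureVector, 1, ClassID, ClassProb, \
                    Density, KSigmaProb)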

Result

If the parameters are valid, the operator train_class_gmm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

Possible Predecessors

add_sample_class_gmm, read_samples_class_gmm

Possible Successors

evaluate_class_gmm, classify_class_gmm, write_class_gmm, create_class_lut_gmm

Alternatives

read_class_gmm

See also

create_class_gmm

References

Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 3; March 2002.

Module

Foundation