
train_class_mlp (Operator)

Name

train_class_mlp — Train a multilayer perceptron.

Signature

train_class_mlp( : : MLPHandle, MaxIterations, WeightTolerance, ErrorTolerance : Error, ErrorLog)

Herror T_train_class_mlp(const Htuple MLPHandle, const Htuple MaxIterations, const Htuple WeightTolerance, const Htuple ErrorTolerance, Htuple* Error, Htuple* ErrorLog)

Herror train_class_mlp(const HTuple& MLPHandle, const HTuple& MaxIterations, const HTuple& WeightTolerance, const HTuple& ErrorTolerance, HTuple* Error, HTuple* ErrorLog)

double HClassMlp::TrainClassMlp(const HTuple& MaxIterations, const HTuple& WeightTolerance, const HTuple& ErrorTolerance, HTuple* ErrorLog) const

void TrainClassMlp(const HTuple& MLPHandle, const HTuple& MaxIterations, const HTuple& WeightTolerance, const HTuple& ErrorTolerance, HTuple* Error, HTuple* ErrorLog)

double HClassMlp::TrainClassMlp(Hlong MaxIterations, double WeightTolerance, double ErrorTolerance, HTuple* ErrorLog) const

void HOperatorSetX.TrainClassMlp(
[in] VARIANT MLPHandle, [in] VARIANT MaxIterations, [in] VARIANT WeightTolerance, [in] VARIANT ErrorTolerance, [out] VARIANT* Error, [out] VARIANT* ErrorLog)

double HClassMlpX.TrainClassMlp(
[in] Hlong MaxIterations, [in] double WeightTolerance, [in] double ErrorTolerance, [out] VARIANT* ErrorLog)

static void HOperatorSet.TrainClassMlp(HTuple MLPHandle, HTuple maxIterations, HTuple weightTolerance, HTuple errorTolerance, out HTuple error, out HTuple errorLog)

double HClassMlp.TrainClassMlp(int maxIterations, double weightTolerance, double errorTolerance, out HTuple errorLog)

Description

train_class_mlp trains the multilayer perceptron (MLP) given in MLPHandle. Before the MLP can be trained, all training samples to be used for the training must be stored in the MLP using add_sample_class_mlp or read_samples_class_mlp. If new training samples are to be used after the training, a new MLP must be created with create_class_mlp, in which again all training samples to be used (i.e., the original ones and the additional ones) must be stored. In these cases, it is useful to save and read the training data with write_samples_class_mlp and read_samples_class_mlp, respectively. A second training with additional training samples is not explicitly forbidden by train_class_mlp. However, it typically does not lead to good results: the training of an MLP is a complex nonlinear optimization problem, and a second training with new data will very likely cause the optimization to get stuck in a local minimum.
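For illustration, here is a minimal HDevelop sketch of this workflow. The file name 'original_samples.mtf', the dimensions NIn, NHidden, and NOut, and the variables NewFeatureVector and NewClass are hypothetical placeholders, not values prescribed by the operator.

* Save the samples used for the first training:
write_samples_class_mlp (MLPHandle, 'original_samples.mtf')
* Later, when additional samples are available, create a fresh MLP
* and store the original samples plus the new ones before training:
create_class_mlp (NIn, NHidden, NOut, 'softmax', 'normalization', 1,\
                  42, MLPHandleNew)
read_samples_class_mlp (MLPHandleNew, 'original_samples.mtf')
add_sample_class_mlp (MLPHandleNew, NewFeatureVector, NewClass)
train_class_mlp (MLPHandleNew, 200, 1.0, 0.01, Error, ErrorLog)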

During the training, the error the MLP achieves on the stored training samples is minimized by a nonlinear optimization algorithm, which determines the MLP weights described in create_class_mlp. create_class_mlp initializes the weights with random values to make it very likely that the optimization converges to the global minimum of the error function. Nevertheless, in rare cases it may happen that the random values determined with RandSeed in create_class_mlp result in a relatively large optimum error, i.e., that the optimization gets stuck in a local minimum. If it can be conjectured that this has happened, the MLP should be created anew with a different value for RandSeed in order to check whether a significantly smaller error can be achieved.
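One way to guard against an unlucky seed is to train several MLPs with different values for RandSeed and keep the one with the smallest training error. A minimal sketch, assuming the sample file 'samples.mtf' and a hypothetical range of five seeds:

* Train with several seeds and keep the best classifier.
BestError := 1e30
for Seed := 1 to 5 by 1
    create_class_mlp (NIn, NHidden, NOut, 'softmax', 'normalization', 1,\
                      Seed, TestHandle)
    read_samples_class_mlp (TestHandle, 'samples.mtf')
    train_class_mlp (TestHandle, 200, 1.0, 0.01, Error, ErrorLog)
    if (Error < BestError)
        * Free the previous best handle before replacing it.
        if (BestError < 1e30)
            clear_class_mlp (MLPHandle)
        endif
        BestError := Error
        MLPHandle := TestHandle
    else
        clear_class_mlp (TestHandle)
    endif
endfor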

The parameters MaxIterations, WeightTolerance, and ErrorTolerance control the nonlinear optimization algorithm. MaxIterations specifies the maximum number of iterations of the optimization algorithm. In practice, values between 100 and 200 should be sufficient for most problems. WeightTolerance specifies a threshold for the change of the weights per iteration. Here, the absolute values of the changes of the weights between two iterations are summed. Hence, this value depends on the number of weights as well as the size of the weights, which in turn depend on the scaling of the training data. Typically, values between 0.00001 and 1 should be used. ErrorTolerance specifies a threshold for the change of the error value per iteration. This value depends on the number of training samples as well as the number of output variables of the MLP. Here, too, values between 0.00001 and 1 should typically be used. The optimization is terminated if the weight change is smaller than WeightTolerance and the change of the error value is smaller than ErrorTolerance. In any case, the optimization is terminated after at most MaxIterations iterations. It should be noted that, depending on the size of the MLP and the number of training samples, the training can take from a few seconds to several hours.
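Since the runtime is hard to predict in advance, it can be worthwhile to time a training run. A minimal sketch, assuming MLPHandle has already been created and filled with samples:

* Time one training run to gauge how MaxIterations and the
* network size affect the runtime.
count_seconds (T1)
train_class_mlp (MLPHandle, 200, 1.0, 0.01, Error, ErrorLog)
count_seconds (T2)
TrainingSeconds := T2 - T1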

On output, train_class_mlp returns the error of the MLP with the optimal weights on the training samples in Error. Furthermore, ErrorLog contains the error value as a function of the number of iterations. This makes it possible to decide whether a second training of the MLP with the same training data, without creating the MLP anew, makes sense. If ErrorLog is regarded as a function, it should drop off steeply initially, while leveling out very flatly at the end. If ErrorLog is still relatively steep at the end, it usually makes sense to call train_class_mlp again. It should be noted, however, that this mechanism should not be used to train the MLP successively with MaxIterations = 1 (or other small values for MaxIterations), because this will substantially increase the number of iterations required to train the MLP.
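A possible heuristic for this decision is sketched below; the window of ten iterations and the relative threshold of 0.001 are hypothetical choices, not values prescribed by the operator.

* Retrain if the error still drops noticeably over the last iterations.
NLog := |ErrorLog|
if (NLog > 10)
    RecentDrop := ErrorLog[NLog - 11] - ErrorLog[NLog - 1]
    if (RecentDrop > 0.001 * ErrorLog[NLog - 1])
        train_class_mlp (MLPHandle, 100, 1.0, 0.01, Error, ErrorLog)
    endif
endif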

Parallelization

Parameters

MLPHandle (input_control)  class_mlp → (integer)

MLP handle.

MaxIterations (input_control)  integer → (integer)

Maximum number of iterations of the optimization algorithm.

Default value: 200

Suggested values: 20, 40, 60, 80, 100, 120, 140, 160, 180, 200, 220, 240, 260, 280, 300

WeightTolerance (input_control)  real → (real)

Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm.

Default value: 1.0

Suggested values: 1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001

Restriction: WeightTolerance >= 1.0e-8

ErrorTolerance (input_control)  real → (real)

Threshold for the difference of the mean error of the MLP on the training data between two iterations of the optimization algorithm.

Default value: 0.01

Suggested values: 1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001

Restriction: ErrorTolerance >= 1.0e-8

Error (output_control)  real → (real)

Mean error of the MLP on the training data.

ErrorLog (output_control)  real-array → (real)

Mean error of the MLP on the training data as a function of the number of iterations of the optimization algorithm.

Example (HDevelop)

* Train an MLP
* Create the MLP (NIn, NHidden, and NOut must be set beforehand)
create_class_mlp (NIn, NHidden, NOut, 'softmax', 'normalization', 1,\
                  42, MLPHandle)
* Load the previously stored training samples
read_samples_class_mlp (MLPHandle, 'samples.mtf')
* Train with at most 100 iterations
train_class_mlp (MLPHandle, 100, 1.0, 0.01, Error, ErrorLog)
* Save the trained classifier and free the handle
write_class_mlp (MLPHandle, 'classifier.mlp')
clear_class_mlp (MLPHandle)

Result

If the parameters are valid, the operator train_class_mlp returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

train_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing = 'canonical_variates' is used. This typically indicates that not enough training samples have been stored for each class.
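If the training is embedded in a larger program, such an exception can be caught with HDevelop's try/catch mechanism. A minimal sketch:

* Catch a training exception, e.g., error 9211 when
* 'canonical_variates' has too few samples per class.
try
    train_class_mlp (MLPHandle, 200, 1.0, 0.01, Error, ErrorLog)
catch (Exception)
    * The first element of Exception holds the error code.
    ErrorCode := Exception[0]
endtry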

Possible Predecessors

add_sample_class_mlp, read_samples_class_mlp

Possible Successors

evaluate_class_mlp, classify_class_mlp, write_class_mlp, create_class_lut_mlp

Alternatives

read_class_mlp

See also

create_class_mlp

References

Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.

Module

Foundation

