`train_class_mlp`

— Train a multilayer perceptron.

**train_class_mlp**( : : *MLPHandle*, *MaxIterations*, *WeightTolerance*, *ErrorTolerance* : *Error*, *ErrorLog*)

`train_class_mlp` trains the multilayer perceptron (MLP) given in *MLPHandle*. Before the MLP can be trained, the training samples must be stored in the MLP with `add_sample_class_mlp` or `read_samples_class_mlp`. If additional training samples should be used after the training, a new MLP must be created with `create_class_mlp`, in which again all training samples to be used must be stored with `add_sample_class_mlp` or `read_samples_class_mlp`, respectively. A second training with additional training samples is not explicitly forbidden by `train_class_mlp`. However, this typically does not lead to good results because the training of an MLP is a complex nonlinear optimization problem, and consequently a second training with new data will very likely cause the optimization to get stuck in a local minimum.

If a rejection class has been specified using `set_rejection_params_class_mlp`, the samples for the rejection class are generated before the actual training.

During the training, the error the MLP achieves on the stored training samples is minimized by using a nonlinear optimization algorithm. If the MLP has been regularized with `set_regularization_params_class_mlp`, an additional weight penalty term is taken into account. With this, the MLP weights described in `create_class_mlp` are determined. Furthermore, if an automatic determination of the regularization parameters has been specified with `set_regularization_params_class_mlp`, these parameters are optimized as well. As described at `set_regularization_params_class_mlp`, training the MLP with automatic determination of the regularization parameters requires significantly more time than training an unregularized MLP or an MLP with fixed regularization parameters.
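As a sketch, regularized training with automatic determination of the regularization parameters might look like this; the generic parameter name `'num_outer_iterations'` and its value are illustrative and should be checked against `set_regularization_params_class_mlp`:

```hdevelop
* Request automatic determination of the regularization parameters
* (evidence procedure); note that this makes training much slower.
set_regularization_params_class_mlp (MLPHandle, \
                                     'num_outer_iterations', 5)
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
```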

`create_class_mlp` initializes the MLP weights with random values to make it very likely that the optimization converges to the global minimum of the error function. Nevertheless, in rare cases it may happen that the random values determined with `RandSeed` in `create_class_mlp` result in a relatively large optimum error, i.e., that the optimization gets stuck in a local minimum. If it can be conjectured that this has happened, the MLP should be created anew with a different value for `RandSeed` in order to check whether a significantly smaller error can be achieved.
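A sketch of this retry, assuming the training samples were previously saved to a file named `'samples.mtf'` (a placeholder); only the `RandSeed` argument differs from the original creation:

```hdevelop
* Recreate the MLP with a different random seed (here 17 instead
* of the original seed) and train it again on the same samples.
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'normalization', 1, 17, MLPHandleRetry)
read_samples_class_mlp (MLPHandleRetry, 'samples.mtf')
train_class_mlp (MLPHandleRetry, 200, 1, 0.01, ErrorRetry, \
                 ErrorLogRetry)
* Keep whichever MLP achieved the smaller training error.
```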

The parameters *MaxIterations*, *WeightTolerance*, and *ErrorTolerance* control the nonlinear optimization algorithm. Note that if an automatic determination of the regularization parameters has been specified with `set_regularization_params_class_mlp`, these parameters refer to one training within one step of the evidence procedure. *MaxIterations* specifies the maximum number of iterations of the optimization algorithm. The optimization is terminated if the change of the MLP weights between two iterations falls below *WeightTolerance* and the change of the mean error falls below *ErrorTolerance*; in any case, at most *MaxIterations* iterations are performed.
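For example, a stricter convergence criterion can be requested by lowering both tolerances while capping the iteration count (the values below are illustrative and respect the documented lower bound of 1.0e-8):

```hdevelop
* Stop only when both the weight change and the error change are
* very small, but never run more than 300 iterations.
train_class_mlp (MLPHandle, 300, 0.00001, 0.00001, Error, ErrorLog)
```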

On output, `train_class_mlp` returns the error of the MLP with the optimal weights on the training samples in *Error*. Furthermore, *ErrorLog* contains the mean error as a function of the number of iterations of the optimization algorithm. From *ErrorLog* it can be seen whether the optimization was terminated because *MaxIterations* was reached: in this case, the error values typically still decrease noticeably at the end of the log, and the training can be continued by calling `train_class_mlp` again. It should be noted, however, that this mechanism should not be used to train the MLP successively with small values of *MaxIterations*, since this increases the total training time substantially; instead, *MaxIterations* should be chosen large enough from the start. If an automatic determination of the regularization parameters has been specified with `set_regularization_params_class_mlp`, *Error* and *ErrorLog* refer to the final training performed with the regularization parameters determined by `set_regularization_params_class_mlp`.
- Multithreading type: reentrant (runs in parallel with non-exclusive operators).
- Multithreading scope: global (may be called from any thread).
- Automatically parallelized on internal data level.

This operator modifies the state of the following input parameter: *MLPHandle*. During execution of this operator, access to the value of this parameter must be synchronized if it is used across multiple threads.

`MLPHandle` → (handle)
MLP handle.

`MaxIterations` → (integer)
Maximum number of iterations of the optimization algorithm.
Default value: 200
Suggested values: 20, 40, 60, 80, 100, 120, 140, 160, 180, 200, 220, 240, 260, 280, 300

`WeightTolerance` → (real)
Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm.
Default value: 1.0
Suggested values: 1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001
Restriction: `WeightTolerance >= 1.0e-8`

`ErrorTolerance` → (real)
Threshold for the difference of the mean error of the MLP on the training data between two iterations of the optimization algorithm.
Default value: 0.01
Suggested values: 1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001
Restriction: `ErrorTolerance >= 1.0e-8`

`Error` → (real)
Mean error of the MLP on the training data.

`ErrorLog` → (real)
Mean error of the MLP on the training data as a function of the number of iterations of the optimization algorithm.

```hdevelop
* Train an MLP
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'normalization', 1, 42, MLPHandle)
read_samples_class_mlp (MLPHandle, 'samples.mtf')
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
write_class_mlp (MLPHandle, 'classifier.mlp')
```

If the parameters are valid, the operator `train_class_mlp` returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

`train_class_mlp` may return the error 9211 (Matrix is not positive definite) if `Preprocessing` = *'canonical_variates'* is used. This typically indicates that not enough training samples have been stored for each class.
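One way to guard against this error in an application is HDevelop's try/catch error handling; the recovery strategy sketched in the comments is application-specific, not prescribed by the operator:

```hdevelop
try
    train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
catch (Exception)
    * Error 9211 with 'canonical_variates' typically means too few
    * samples per class were stored. Possible reactions: add more
    * samples, or recreate the MLP with Preprocessing =
    * 'normalization' and train again.
endtry
```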

**Possible predecessors**

`add_sample_class_mlp`, `read_samples_class_mlp`, `set_regularization_params_class_mlp`

**Possible successors**

`evaluate_class_mlp`, `classify_class_mlp`, `write_class_mlp`, `create_class_lut_mlp`

**Alternatives**

`train_dl_classifier_batch`, `read_class_mlp`

**References**

Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.

Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.

**Module**

Foundation