
create_class_mlp (Operator)

Name

create_class_mlp — Create a multilayer perceptron for classification or regression.

Signature

create_class_mlp( : : NumInput, NumHidden, NumOutput, OutputFunction, Preprocessing, NumComponents, RandSeed : MLPHandle)

Herror create_class_mlp(const Hlong NumInput, const Hlong NumHidden, const Hlong NumOutput, const char* OutputFunction, const char* Preprocessing, const Hlong NumComponents, const Hlong RandSeed, Hlong* MLPHandle)

Herror T_create_class_mlp(const Htuple NumInput, const Htuple NumHidden, const Htuple NumOutput, const Htuple OutputFunction, const Htuple Preprocessing, const Htuple NumComponents, const Htuple RandSeed, Htuple* MLPHandle)

Herror create_class_mlp(const HTuple& NumInput, const HTuple& NumHidden, const HTuple& NumOutput, const HTuple& OutputFunction, const HTuple& Preprocessing, const HTuple& NumComponents, const HTuple& RandSeed, Hlong* MLPHandle)

void HClassMlp::CreateClassMlp(const HTuple& NumInput, const HTuple& NumHidden, const HTuple& NumOutput, const HTuple& OutputFunction, const HTuple& Preprocessing, const HTuple& NumComponents, const HTuple& RandSeed)

void CreateClassMlp(const HTuple& NumInput, const HTuple& NumHidden, const HTuple& NumOutput, const HTuple& OutputFunction, const HTuple& Preprocessing, const HTuple& NumComponents, const HTuple& RandSeed, HTuple* MLPHandle)

HClassMlp::HClassMlp(Hlong NumInput, Hlong NumHidden, Hlong NumOutput, const HString& OutputFunction, const HString& Preprocessing, Hlong NumComponents, Hlong RandSeed)

HClassMlp::HClassMlp(Hlong NumInput, Hlong NumHidden, Hlong NumOutput, const char* OutputFunction, const char* Preprocessing, Hlong NumComponents, Hlong RandSeed)

void HClassMlp::CreateClassMlp(Hlong NumInput, Hlong NumHidden, Hlong NumOutput, const HString& OutputFunction, const HString& Preprocessing, Hlong NumComponents, Hlong RandSeed)

void HClassMlp::CreateClassMlp(Hlong NumInput, Hlong NumHidden, Hlong NumOutput, const char* OutputFunction, const char* Preprocessing, Hlong NumComponents, Hlong RandSeed)

void HOperatorSetX.CreateClassMlp([in] VARIANT NumInput, [in] VARIANT NumHidden, [in] VARIANT NumOutput, [in] VARIANT OutputFunction, [in] VARIANT Preprocessing, [in] VARIANT NumComponents, [in] VARIANT RandSeed, [out] VARIANT* MLPHandle)

void HClassMlpX.CreateClassMlp([in] Hlong NumInput, [in] Hlong NumHidden, [in] Hlong NumOutput, [in] BSTR OutputFunction, [in] BSTR Preprocessing, [in] Hlong NumComponents, [in] Hlong RandSeed)

static void HOperatorSet.CreateClassMlp(HTuple numInput, HTuple numHidden, HTuple numOutput, HTuple outputFunction, HTuple preprocessing, HTuple numComponents, HTuple randSeed, out HTuple MLPHandle)

public HClassMlp(int numInput, int numHidden, int numOutput, string outputFunction, string preprocessing, int numComponents, int randSeed)

void HClassMlp.CreateClassMlp(int numInput, int numHidden, int numOutput, string outputFunction, string preprocessing, int numComponents, int randSeed)

Description

create_class_mlp creates a neural net in the form of a multilayer perceptron (MLP), which can be used for classification or regression (function approximation), depending on how OutputFunction is set. The MLP consists of three layers: an input layer with NumInput input variables (units, neurons), a hidden layer with NumHidden units, and an output layer with NumOutput output variables. The MLP performs the following steps to calculate the activations $z_j$ of the hidden units from the input data $x_i$ (the so-called feature vector), where $N_I$, $N_H$, and $N_O$ denote NumInput, NumHidden, and NumOutput, respectively:

$$z_j = \tanh\Bigl(\sum_{i=1}^{N_I} w_{ji}^{(1)} x_i + b_j^{(1)}\Bigr), \qquad j = 1, \ldots, N_H$$

Here, the matrix $w_{ji}^{(1)}$ and the vector $b_j^{(1)}$ are the weights of the input layer (first layer) of the MLP. In the output layer (second layer), the activations $z_j$ are transformed in a first step by using linear combinations of the variables in an analogous manner as above:

$$a_k = \sum_{j=1}^{N_H} w_{kj}^{(2)} z_j + b_k^{(2)}, \qquad k = 1, \ldots, N_O$$

Here, the matrix $w_{kj}^{(2)}$ and the vector $b_k^{(2)}$ are the weights of the output layer (second layer) of the MLP.
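
As a brief worked illustration with arbitrarily chosen numbers (not produced by the operator): for a single hidden unit with weights $w^{(1)} = (0.5, -0.3)$, bias $b^{(1)} = 0.1$, and feature vector $x = (1, 1)$, the activation is $z = \tanh(0.5 \cdot 1 - 0.3 \cdot 1 + 0.1) = \tanh(0.3) \approx 0.291$.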

The activation function used in the output layer can be determined by setting OutputFunction. For OutputFunction = 'linear', the data are simply copied:

$$y_k = a_k$$

This type of activation function should be used for regression problems (function approximation); it is not suited for classification problems.

For OutputFunction = 'logistic', the activations are computed as follows:

$$y_k = \frac{1}{1 + \exp(-a_k)}$$

This type of activation function should be used for classification problems with multiple (NumOutput) independent logical attributes as output. This kind of classification problem is relatively rare in practice.

For OutputFunction = 'softmax', the activations are computed as follows:

$$y_k = \frac{\exp(a_k)}{\sum_{l=1}^{N_O} \exp(a_l)}$$

This type of activation function should be used for common classification problems with multiple (NumOutput) mutually exclusive classes as output. In particular, OutputFunction = 'softmax' must be used for the classification of pixel data with classify_image_class_mlp.
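
As a quick numeric check with arbitrarily chosen activations (not produced by the operator): for $a = (2, 1, 0.1)$, the softmax yields $y \approx (0.659, 0.242, 0.099)$; the outputs are positive and sum to 1, so they can be interpreted as class confidences.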

The parameters Preprocessing and NumComponents can be used to specify a preprocessing of the feature vectors. For Preprocessing = 'none', the feature vectors are passed unaltered to the MLP. NumComponents is ignored in this case.

For all other values of Preprocessing, a transformation of the feature vectors is computed from the training data during the training; it is applied to the feature vectors during training as well as later during classification or evaluation.

For Preprocessing = 'normalization', the feature vectors are normalized by subtracting the mean of the training vectors and dividing the result by the standard deviation of the individual components of the training vectors. Hence, the transformed feature vectors have a mean of 0 and a standard deviation of 1. The normalization does not change the length of the feature vector. NumComponents is ignored in this case. This transformation can be used if the mean and standard deviation of the feature vectors differ substantially from 0 and 1, respectively, or for data in which the components of the feature vectors are measured in different units (e.g., if some of the data are gray value features and some are region features, or if region features are mixed, e.g., 'circularity' (unit: scalar) and 'area' (unit: pixel squared)). In these cases, the training of the net will typically require fewer iterations than without normalization.
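
Expressed as a formula (the notation is ours, not from this reference): each component $i$ of a feature vector $x$ is transformed to $x'_i = (x_i - \mu_i) / \sigma_i$, where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of component $i$ over the training vectors.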

For Preprocessing = 'principal_components', a principal component analysis is performed. First, the feature vectors are normalized (see above). Then, an orthogonal transformation (a rotation in the feature space) that decorrelates the training vectors is computed. After the transformation, the mean of the training vectors is 0 and the covariance matrix of the training vectors is a diagonal matrix. The transformation is chosen such that the features containing the most variation end up in the first components of the transformed feature vector. With this, it is possible to omit the last components of the transformed feature vector, which typically are mainly influenced by noise, without losing a large amount of information. The parameter NumComponents can be used to determine how many of the transformed feature vector components should be used. Up to NumInput components can be selected. The operator get_prep_info_class_mlp can be used to determine how much information each transformed component contains, and hence aids the selection of NumComponents. Like data normalization, this transformation can be used if the mean and standard deviation of the feature vectors differ substantially from 0 and 1, respectively, or for feature vectors in which the components are measured in different units. In addition, this transformation is useful if it can be expected that the features are highly correlated.
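
The following HDevelop sketch shows one way NumComponents might be chosen with get_prep_info_class_mlp; the variable names and the 90% information threshold are illustrative assumptions, not prescriptions from this reference:

* Create a throwaway MLP whose samples are used only to inspect
* the preprocessing.
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'principal_components', NumIn, 42, MLPTest)
* ... add the training samples with add_sample_class_mlp ...
* Query the information content of the transformed components.
get_prep_info_class_mlp (MLPTest, 'principal_components', \
                         InformationCont, CumInformationCont)
* Keep the leading components that together carry at least 90%
* of the information.
NumComponents := 1
for J := 0 to |CumInformationCont|-1 by 1
    if (CumInformationCont[J] < 0.9)
        NumComponents := J + 2
    endif
endfor
clear_class_mlp (MLPTest)
* Create the actual classifier with the reduced number of inputs.
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'principal_components', NumComponents, 42, MLPHandle)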

In contrast to the above three transformations, which can be used for all MLP types, the transformation specified by Preprocessing = 'canonical_variates' can only be used if the MLP is used as a classifier with OutputFunction = 'softmax'. The computation of the canonical variates is also called linear discriminant analysis. In this case, a transformation that first normalizes the training vectors and then decorrelates the training vectors on average over all classes is computed. At the same time, the transformation maximally separates the mean values of the individual classes. As for Preprocessing = 'principal_components', the transformed components are sorted by information content, and hence transformed components with little information content can be omitted. For canonical variates, up to min(NumOutput - 1, NumInput) components can be selected. Also in this case, the information content of the transformed components can be determined with get_prep_info_class_mlp. Like principal component analysis, canonical variates can be used to reduce the amount of data without losing a large amount of information, while additionally optimizing the separability of the classes after the data reduction.

For the last two types of transformations ('principal_components' and 'canonical_variates'), the actual number of input units of the MLP is determined by NumComponents, whereas NumInput determines the dimensionality of the input data (i.e., the length of the untransformed feature vector). Hence, by using one of these two transformations, the number of input variables, and thus usually also the number of hidden units, can be reduced. With this, the time needed to train the MLP and to evaluate and classify a feature vector is typically reduced.

Usually, NumHidden should be selected in the order of magnitude of NumInput and NumOutput. In many cases, much smaller values of NumHidden already lead to very good classification results. If NumHidden is chosen too large, the MLP may overfit the training data, which typically leads to bad generalization properties, i.e., the MLP learns the training data very well, but does not return very good results on unknown data.

create_class_mlp initializes the weights described above with random numbers. To ensure that the results of training the classifier with train_class_mlp are reproducible, the seed value of the random number generator is passed in RandSeed. If the training results in a relatively large error, it sometimes may be possible to achieve a smaller error by selecting a different value for RandSeed and retraining the MLP.
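
A minimal HDevelop sketch of this retry strategy (the seed list, training parameters, and variable names are illustrative assumptions):

* Train with several seeds and keep the MLP with the smallest error.
BestError := 1e30
Seeds := [42,123,4711]
for S := 0 to |Seeds|-1 by 1
    create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                      'normalization', NumIn, Seeds[S], MLPTrial)
    * ... add the training samples with add_sample_class_mlp ...
    train_class_mlp (MLPTrial, 200, 1, 0.01, Error, ErrorLog)
    if (Error < BestError)
        * Discard a previously found best MLP, if any.
        if (BestError < 1e30)
            clear_class_mlp (MLPBest)
        endif
        BestError := Error
        MLPBest := MLPTrial
    else
        clear_class_mlp (MLPTrial)
    endif
endfor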

After the MLP has been created, training samples are typically added to the MLP by repeatedly calling add_sample_class_mlp or read_samples_class_mlp. After this, the MLP is typically trained using train_class_mlp. Hereafter, the MLP can be saved using write_class_mlp. Alternatively, the MLP can be used immediately after training to evaluate data using evaluate_class_mlp or, if the MLP is used as a classifier (i.e., for OutputFunction = 'softmax'), to classify data using classify_class_mlp.

The training of the MLP will usually result in very sharp boundaries between the different classes, i.e., the confidence for one class will drop from close to 1 (within the region of the class) to close to 0 (within the region of a different class) within a very narrow “band” in the feature space. If the classes do not overlap, this transition happens at a suitable location between the classes; if the classes overlap, the transition happens at a suitable location within the overlapping area. While this sharp transition is desirable in many applications, in some applications a smoother transition between different classes (i.e., a transition within a wider “band” in the feature space) is desirable to reflect a level of uncertainty within the region in the feature space between the classes. Furthermore, as described above, it may be desirable to prevent overfitting of the MLP to the training data. For these purposes, the MLP can be regularized by using set_regularization_params_class_mlp.
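
A minimal sketch, assuming the generic parameter 'weight_prior' of set_regularization_params_class_mlp; the value 1.0 is an illustrative choice, and suitable values are application dependent:

create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'normalization', NumIn, 42, MLPHandle)
* Regularize the weights before training to obtain smoother class
* transitions and to reduce overfitting.
set_regularization_params_class_mlp (MLPHandle, 'weight_prior', 1.0)
* ... add samples and train as usual ...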

An MLP, as defined above, has no inherent capability for novelty detection, i.e., it will classify a random feature vector into one of the classes with a confidence close to 1 (unless the random feature vector happens to lie in a region of the feature space in which the training samples of different classes overlap). In some applications, however, it is desirable to reject feature vectors that do not lie close to any class, where “closeness” is defined by the proximity of the feature vector to the feature vectors in the training set. To provide an MLP with the ability for novelty detection, i.e., to reject feature vectors that do not belong to any class, an explicit rejection class can be created by setting NumOutput to the number of actual classes plus 1. Then, set_rejection_params_class_mlp can be used to configure train_class_mlp to automatically generate samples for this rejection class.
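
A minimal sketch of this setup, assuming set_rejection_params_class_mlp accepts the generic parameter 'sampling_strategy' with the value 'hyperbox_around_all_classes'; the number of actual classes (5) is an illustrative assumption:

* Reserve one extra output unit for the rejection class.
create_class_mlp (NumIn, NumHidden, 5 + 1, 'softmax', \
                  'normalization', NumIn, 42, MLPHandle)
* Let train_class_mlp generate samples for the rejection class
* automatically.
set_rejection_params_class_mlp (MLPHandle, 'sampling_strategy', \
                                'hyperbox_around_all_classes')
* ... add samples for the 5 actual classes and train as usual ...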

The combination of regularization and an automatic generation of a rejection class is useful in many applications since it provides a smooth transition between the actual classes and from the actual classes to the rejection class. This reflects the requirement of these applications that only feature vectors within the area of the feature space that corresponds to the training samples of each class should have a confidence close to 1, whereas random feature vectors not belonging to any class should have a confidence close to 0, and that transitions between the classes should be smooth, reflecting a growing degree of uncertainty the farther a feature vector lies from the respective class. In particular, OCR applications sometimes have this requirement (see create_ocr_class_mlp).

A comparison of the MLP and the support vector machine (SVM) (see create_class_svm) typically shows that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications. Please note that this guideline assumes optimal tuning of the parameters.

Parallelization

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific operators even though the handle is used as an input parameter by those operators.

Parameters

NumInput (input_control)  integer → (integer)

Number of input variables (features) of the MLP.

Default value: 20

Suggested values: 1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100

Restriction: NumInput >= 1

NumHidden (input_control)  integer → (integer)

Number of hidden units of the MLP.

Default value: 10

Suggested values: 1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 150

Restriction: NumHidden >= 1

NumOutput (input_control)  integer → (integer)

Number of output variables (classes) of the MLP.

Default value: 5

Suggested values: 1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 150

Restriction: NumOutput >= 1

OutputFunction (input_control)  string → (string)

Type of the activation function in the output layer of the MLP.

Default value: 'softmax'

List of values: 'linear', 'logistic', 'softmax'

Preprocessing (input_control)  string → (string)

Type of preprocessing used to transform the feature vectors.

Default value: 'normalization'

List of values: 'canonical_variates', 'none', 'normalization', 'principal_components'

NumComponents (input_control)  integer → (integer)

Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization').

Default value: 10

Suggested values: 1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100

Restriction: NumComponents >= 1

RandSeed (input_control)  integer → (integer)

Seed value of the random number generator that is used to initialize the MLP with random values.

Default value: 42

MLPHandle (output_control)  class_mlp → (handle)

MLP handle.

Example (HDevelop)

* Use the MLP for regression (function approximation)
create_class_mlp (1, NumHidden, 1, 'linear', 'none', 1, 42, MLPHandle)
* Generate the training data
* D = [...]
* T = [...]
* Add the training data
for J := 0 to NumData-1 by 1
    add_sample_class_mlp (MLPHandle, D[J], T[J])
endfor
* Train the MLP
train_class_mlp (MLPHandle, 200, 0.001, 0.001, Error, ErrorLog)
* Generate test data
* X = [...]
* Compute the output of the MLP on the test data
for J := 0 to N-1 by 1
    evaluate_class_mlp (MLPHandle, X[J], Y)
endfor
clear_class_mlp (MLPHandle)

* Use the MLP for classification
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'normalization', NumIn, 42, MLPHandle)
* Generate and add the training data
for J := 0 to NumData-1 by 1
    * Generate training features and classes
    * Data = [...]
    * Class = [...]
    add_sample_class_mlp (MLPHandle, Data, Class)
endfor
* Train the MLP
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
* Use the MLP to classify unknown data
for J := 0 to N-1 by 1
    * Extract features
    * Features = [...]
    classify_class_mlp (MLPHandle, Features, 1, Class, Confidence)
endfor
clear_class_mlp (MLPHandle)

Result

If the parameters are valid, the operator create_class_mlp returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

Possible Successors

add_sample_class_mlp, set_regularization_params_class_mlp, set_rejection_params_class_mlp

Alternatives

create_class_svm, create_class_gmm

See also

clear_class_mlp, train_class_mlp, classify_class_mlp, evaluate_class_mlp

References

Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.

Module

Foundation

