TRAINING PROBABILISTIC NEURAL NETWORKS ON THE SINGLE CLASS PATTERN MATRIX AND ON CONCATENATION OF PATTERN MATRICES

Authors

  • V. V. Romanuke
  • G. A. Yegoshyna
  • S. M. Voronoy

DOI:

https://doi.org/10.33243/2518-7139-2019-1-2-86-97

Abstract

The possibility of optimizing a probabilistic neural network by building an efficient training set is studied. Commonly, the training dataset for a probabilistic neural network is a matrix whose columns represent classes. If every class is represented by a single column, the matrix is called the single class pattern matrix. However, the simple architecture of the probabilistic neural network does not imply that the class pattern must be single. First, the range of values of a class feature can be too wide; then it is desirable to break it into subranges, each of which yields its own average, so that a few patterns are formed for this class. Second, a class feature can take a finite number of values, every value being of equal importance; then it would be incorrect to calculate an average and use it in the respective single class pattern. Thus, it is studied whether concatenating pattern matrices into a long pattern matrix is reasonable. In fact, the goal of the research is to ascertain whether it is efficient to build probabilistic neural networks on long pattern matrices. The efficiency criterion is the performance of the probabilistic neural network, i.e. either its accuracy or its percentage of errors. To achieve the goal, the performance of the probabilistic neural network is estimated for the case when a class is described by a few class patterns. The probabilistic neural networks are then tested in two subcases: when objects generated by different class patterns are fed to the input, and when objects are generated by a generalized single class pattern. Eventually, it is ascertained that training probabilistic neural networks on the single class pattern matrix (obtained either by averaging over the available pattern matrices or just by using one pattern per class) is more efficient when the objects to be classified do not inherit any numerical properties of the class patterns. On the contrary, when the objects to be classified may carry distinct numerical properties of a few class patterns, training probabilistic neural networks on long pattern matrices is more efficient, ensuring noticeably higher accuracy. The smooth training method appears inefficient at improving the performance.
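The contrast between the two training sets described above can be made concrete. The following is a minimal Python sketch, not the authors' implementation: the probabilistic neural network is approximated as a Gaussian Parzen-window classifier, trained once on a single class pattern matrix (averaged over the available pattern matrices) and once on the long pattern matrix obtained by concatenation. The function name pnn_classify, the spread value sigma, and the toy data are illustrative assumptions.

    import numpy as np

    def pnn_classify(x, train_cols, labels, sigma=0.1):
        # A probabilistic neural network realized as a Parzen-window
        # classifier: every training column is a pattern node; a class
        # score is the sum of Gaussian responses of that class's nodes.
        d2 = np.sum((train_cols - x[:, None]) ** 2, axis=0)  # squared distances
        responses = np.exp(-d2 / (2.0 * sigma ** 2))
        classes = np.unique(labels)
        scores = np.array([responses[labels == c].sum() for c in classes])
        return classes[np.argmax(scores)]

    rng = np.random.default_rng(0)

    # Two pattern matrices for 3 classes (rows are features, columns are
    # classes), e.g. two subranges of a wide feature range, each giving
    # its own class average.
    P1 = rng.random((5, 3))
    P2 = rng.random((5, 3))

    # Single class pattern matrix: one column per class, averaged over
    # the available pattern matrices.
    single_mat = (P1 + P2) / 2.0
    labels_single = np.arange(3)

    # Long pattern matrix: concatenation of the pattern matrices, so
    # every class keeps two columns (one per pattern).
    long_mat = np.hstack((P1, P2))
    labels_long = np.tile(np.arange(3), 2)

    # An object generated by one particular class pattern (class 1 as in
    # P2), i.e. the subcase where objects inherit the numerical
    # properties of a class pattern.
    x = P2[:, 1] + 0.05 * rng.standard_normal(5)

    print(pnn_classify(x, single_mat, labels_single))  # PNN on the single matrix
    print(pnn_classify(x, long_mat, labels_long))      # PNN on the long matrix

On objects of this kind the long-matrix network has the advantage the abstract reports, since one of its pattern nodes lies close to the generating pattern; when objects are instead drawn around the generalized (averaged) pattern, the single class pattern matrix suffices.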

Section

Radio Engineering and Telecommunications