U.W. Bangor - School of Informatics - Mathematics Preprints 1997

Pattern Recognition and Fuzzy Systems


97.22 : KUNCHEVA, L.I. and BEZDEK, J.C.

A fuzzy generalized nearest prototype classifier

Abstract:

We propose a fuzzy Generalized Nearest Prototype Classifier (GNPC). The classification decision is crisp and is based on aggregating similarities between the unlabeled object x and a set of prototypes carrying "soft" (fuzzy) labels in the classes. We show that the GNPC contains as special cases:
  • the nearest neighbor classifier;
  • the minimum-distance classifier;
  • a type of radial basis function (RBF) network;
  • a type of fuzzy if-then system for classification.

We suggest an algorithm to design a simple GNPC and illustrate it on the IRIS data.
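
As a concrete illustration of the decision rule, here is a minimal Python sketch of a GNPC-style crisp decision, assuming a Gaussian similarity measure and simple sum aggregation; these choices and all parameter values are illustrative, not taken from the paper.

import numpy as np

def gnpc_decide(x, prototypes, soft_labels, gamma=1.0):
    """Crisp GNPC-style decision (illustrative sketch).

    prototypes  : (p, n) array of prototype vectors.
    soft_labels : (p, c) array; soft_labels[i, j] is the fuzzy
                  membership of prototype i in class j.
    """
    # Similarity of x to each prototype (Gaussian kernel: one
    # possible choice of similarity measure, assumed here).
    dists = np.linalg.norm(prototypes - x, axis=1)
    sims = np.exp(-gamma * dists ** 2)
    # Aggregate the similarities through the soft labels into a
    # per-class support, then label crisply by the maximum.
    support = sims @ soft_labels
    return int(np.argmax(support))

With crisp 0/1 labels and one prototype per class, for instance, this rule reduces to a minimum-distance classifier, one of the special cases listed above.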

Published in:

Proc. 7th IFSA World Congress, Prague, Czech Republic, Vol. III (1997) 217-222

97.23 : KUNCHEVA, L.I. and BEZDEK, J.C.

Nearest prototype classification: Clustering, genetic algorithms or random search?

Abstract:

We address three questions related to the nearest prototype classifier (NPC): (1) Is it better to design the prototype set artificially or to select it as a subset of the given labeled data? (2) How can we trade classification accuracy for a reduction in the number of prototypes? (3) How good is pure random search for selecting prototypes from the data? We compare the resubstitution performance of the NPC on the IRIS data where the prototypes are either extracted by "replacement" (R-prototypes) or "selected" (S-prototypes). Results for R-prototypes are taken from a previous study and are contrasted with S-prototype results obtained by a genetic algorithm (GA) or by random search (RS). The best result reached by both algorithms (GA and RS) is 2 errors with sets of 3 S-prototypes. This compares favorably to the best result found with R-prototypes, viz. 3 errors with 5 R-prototypes. Based on our results, we recommend GA selection for the nearest prototype classifier.
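
As a rough illustration of question (3), the Python sketch below runs pure random search for S-prototypes, scoring each candidate subset by its resubstitution error under the nearest prototype rule; the subset size, iteration count and seed are illustrative assumptions, not the paper's settings.

import numpy as np

def resubstitution_errors(X, y, proto_idx):
    """Errors of the NPC when the prototypes are the selected
    subset (S-prototypes) of the labeled data (X, y)."""
    P, P_labels = X[proto_idx], y[proto_idx]
    d = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2)
    return int(np.sum(P_labels[d.argmin(axis=1)] != y))

def random_search(X, y, n_protos=3, iters=2000, seed=0):
    """Pure random search over subsets of the labeled data."""
    rng = np.random.default_rng(seed)
    best_idx, best_err = None, len(y) + 1
    for _ in range(iters):
        idx = rng.choice(len(y), size=n_protos, replace=False)
        err = resubstitution_errors(X, y, idx)
        if err < best_err:
            best_idx, best_err = idx, err
    return best_idx, best_err

A GA explores the same space of subsets but replaces blind sampling with selection, crossover and mutation, which is why it is recommended above.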

Published in:

IEEE Transactions on Systems, Man, and Cybernetics, Part C 28 (1998) 160-164.

97.24 : KUNCHEVA, L.I.

An application of OWA operators to the aggregation of multiple classification decisions

Abstract:

The paper considers the combination of classification decisions from multiple classifiers. The individual classifier decisions are treated as degrees of membership assigned by the classifier to the object to be labeled. We compare the Ordered Weighted Averaging (OWA) aggregation operators to simple voting and to the linear and logarithmic opinion pools. In general, classification accuracy at the second level does not vary significantly, but we observed that OWA operators tend to generalize better than their competitors when the individual classifiers are overtrained. Our experimental illustration uses two benchmark data sets: the two intertwined spirals data and the "heart" data from the database PROBEN1 at ftp://ftp.ira.uka.de/pub/neuron/.
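
For concreteness, a minimal Python sketch of OWA aggregation over a decision profile follows; the weight vector is a design choice, and nothing below is taken from the paper's experimental setup.

import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: sort the arguments in descending
    order, then take the dot product with the weight vector
    (weights nonnegative, summing to 1)."""
    return float(np.dot(weights, np.sort(values)[::-1]))

def combine(decision_profile, weights):
    """decision_profile[i, j] is the degree of membership in class j
    assigned by classifier i; the combined crisp label is the class
    with the highest OWA support."""
    support = [owa(decision_profile[:, j], weights)
               for j in range(decision_profile.shape[1])]
    return int(np.argmax(support))

The weights interpolate between familiar operators: (1, 0, ..., 0) gives the maximum, (0, ..., 0, 1) the minimum, and uniform weights the simple average.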

Published in:

R.R. Yager and J. Kacprzyk (Eds.)
The Ordered Weighted Averaging Operators: Theory and Applications, Kluwer Academic Publishers, USA (1997) 330-343.

97.25 : KUNCHEVA, L.I.

Fitness functions in editing k-NN reference set by genetic algorithms

Abstract:

We study Genetic Algorithms (GAs) for selecting the reference set of the k-nearest neighbor classifier. Here we look at different fitness functions consisting of two terms: an "accuracy" term and a "penalty" term forcing the GA to keep the cardinality of the selected set small (within a predefined limit). Compared with other editing techniques (Wilson's method and MULTIEDIT), the GA selected much smaller sets with similar or higher classification accuracy. For illustration we used the IRIS data and the "heart" data set from the database PROBEN1 at ftp://ftp.ira.uka.de/pub/neuron/.
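
A fitness function of the studied two-term form might look like the Python sketch below; the penalty shape, the weighting alpha and the cardinality limit are illustrative assumptions, not the paper's exact choices.

import numpy as np

def knn_accuracy(X, y, ref_idx, k=1):
    """Accuracy of the k-NN classifier over the edited reference set,
    evaluated on the whole labeled set (y holds integer class labels)."""
    R, r_lab = X[ref_idx], y[ref_idx]
    d = np.linalg.norm(X[:, None, :] - R[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    pred = np.array([np.bincount(r_lab[row]).argmax() for row in nearest])
    return float(np.mean(pred == y))

def fitness(chromosome, X, y, alpha=0.5, limit=15):
    """'Accuracy' term minus a 'penalty' term that pushes the GA to
    keep the selected reference set within a predefined limit."""
    idx = np.flatnonzero(chromosome)   # chromosome is a binary mask over the data
    if idx.size == 0:
        return 0.0
    penalty = max(0, idx.size - limit) / len(y)
    return knn_accuracy(X, y, idx) - alpha * penalty

Maximizing this fitness trades classification accuracy against reference-set size, which is exactly the tension the abstract describes.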

Published in:

Pattern Recognition 30 (1997) 1041-1049.

97.26 : KUNCHEVA, L.I.

Initializing of an RBF network by a genetic algorithm

Abstract:

We use a genetic algorithm (GA) for selecting the initial seed points (prototypes, kernels) for a Radial Basis Function (RBF) classifier. The chromosome is directly mapped onto the training set and represents a subset of it: the chromosome contains 1 at the i-th position if the respective element of the data set is included, and 0 otherwise. The set of seed points is obtained from the winning chromosome in the last GA generation. Then we use a simulated annealing training scheme for the RBF network: once the centers are fixed, we train the RBF shape parameters and the weights to the output. Experimental results with the IRIS data and the two-spirals data are given.
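
The chromosome-to-seed-point mapping is simple enough to show directly; a minimal Python sketch with a random chromosome and stand-in data follows (the data shape is hypothetical, chosen only to resemble IRIS-sized input).

import numpy as np

def decode_seed_points(chromosome, X):
    """The chromosome is a binary mask over the training set: a 1 at
    position i includes X[i] among the initial RBF centers."""
    return X[np.flatnonzero(chromosome)]

rng = np.random.default_rng(1)
X_train = rng.standard_normal((150, 4))    # stand-in data, not the real IRIS set
chromosome = rng.integers(0, 2, size=len(X_train))
centers = decode_seed_points(chromosome, X_train)

In the GA itself the chromosome is evolved rather than drawn at random; the winning chromosome of the final generation supplies the centers from which the simulated annealing training then proceeds.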

Published in:

Neurocomputing 14 (1997) 273-288.

97.31 : KUNCHEVA, L.I. and BEZDEK, J.C.

Selection of cluster prototypes from data by a genetic algorithm

Published in:

Proc. 5th European Congress on Intelligent Techniques and Soft Computing, Aachen, Germany (1997) 1683-1688
