One of the uses of the support vector machine (SVM), as introduced by V.N. Vapnik (2000), is as a function approximator. The SVM, and approximators based on it, approximate a relation in data by interpolating between so-called support vectors: a limited number of samples selected from the data. Several support-vector-based function approximators are compared in this research. The comparison focuses on the following aspects: i) how many support vectors are needed to achieve a given approximation accuracy, ii) how well noisy training samples are handled, and iii) how ambiguous training data is dealt with. The comparison shows that the so-called key sample machine (KSM) outperforms the other schemes, specifically on aspects i and ii. The distinctive features that explain this are the quadratic cost function and the use of all the training data to train the limited number of parameters.
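As a minimal illustration of aspect i (and not the paper's KSM or its experiments), the sketch below fits a standard epsilon-SVR from scikit-learn to a noisy 1-D function and reports how the number of retained support vectors trades off against training accuracy as the epsilon-tube width varies; all data, parameters, and the choice of library are assumptions for illustration only.

```python
# Illustrative sketch only: epsilon-SVR on a noisy 1-D target, counting
# support vectors versus training error. This is generic support-vector
# regression, not the key sample machine (KSM) described in the abstract.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=X.shape[0])  # noisy samples

for eps in (0.01, 0.05, 0.2):
    model = SVR(kernel="rbf", C=10.0, epsilon=eps).fit(X, y)
    mse = np.mean((model.predict(X) - y) ** 2)
    print(f"epsilon={eps:>4}: {model.support_vectors_.shape[0]:3d} support vectors, "
          f"training MSE={mse:.4f}")
```

A wider epsilon-tube typically yields fewer support vectors at the cost of a coarser fit, which is the kind of trade-off the comparison in the paper examines.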
| Publication status | Published - 2004 |
| Event | 2004 IEEE International Joint Conference on Neural Networks, IJCNN 2004 - Budapest, Hungary |
| Duration | 25 Jul 2004 → 29 Jul 2004 |