Comparison of four support-vector based function approximators

Bas J. de Kruif, Theo J.A. de Vries

    Research output: Contribution to conference › Paper



    One of the uses of the support vector machine (SVM), as introduced by V.N. Vapnik (2000), is as a function approximator. The SVM, and approximators based on it, approximate a relation in data by interpolating between so-called support vectors: a limited number of samples selected from the data. Several support-vector based function approximators are compared in this research. The comparison focuses on three subjects: i) how many support vectors are involved in achieving a certain approximation accuracy, ii) how well noisy training samples are handled, and iii) how ambiguous training data is dealt with. The comparison shows that the so-called key sample machine (KSM) outperforms the other schemes, specifically on aspects i and ii. The distinctive features that explain this are its quadratic cost function and its use of all the training data to train the limited number of parameters.
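    The shared idea described above can be sketched in a few lines. The following is a minimal pure-Python illustration, not the paper's KSM or any of the four compared schemes: it expands the approximation over a small set of "support vectors" (here simply every tenth training sample, an assumption for illustration) with Gaussian radial basis functions of an assumed width, and fits the few weights by a quadratic (least-squares) cost over all training samples, echoing the two features the abstract highlights.

    ```python
    import math

    def rbf(x, c, gamma=10.0):
        # Gaussian radial basis function centred at support vector c
        # (gamma is an assumed kernel width, not taken from the paper)
        return math.exp(-gamma * (x - c) ** 2)

    def solve(A, b):
        # Naive Gaussian elimination with partial pivoting for a small dense system
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for k in range(n):
            p = max(range(k, n), key=lambda r: abs(M[r][k]))
            M[k], M[p] = M[p], M[k]
            for r in range(k + 1, n):
                f = M[r][k] / M[k][k]
                for c in range(k, n + 1):
                    M[r][c] -= f * M[k][c]
        x = [0.0] * n
        for k in range(n - 1, -1, -1):
            x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
        return x

    # Training data: samples of one period of a sine on [0, 1]
    xs = [i / 50 for i in range(51)]
    ys = [math.sin(2 * math.pi * x) for x in xs]

    # A limited set of "support vectors" chosen from the data (every 10th sample)
    sv = xs[::10]

    # Fit the weights by least squares over ALL training samples (normal equations):
    # a quadratic cost and few parameters trained on all the data
    Phi = [[rbf(x, c) for c in sv] for x in xs]
    A = [[sum(Phi[i][a] * Phi[i][b2] for i in range(len(xs)))
          for b2 in range(len(sv))] for a in range(len(sv))]
    b = [sum(Phi[i][a] * ys[i] for i in range(len(xs))) for a in range(len(sv))]
    w = solve(A, b)

    def approx(x):
        # The approximator: a weighted sum of kernels on the support vectors
        return sum(wi * rbf(x, c) for wi, c in zip(w, sv))

    max_err = max(abs(approx(x) - y) for x, y in zip(xs, ys))
    ```

    The sketch makes the trade-off in the comparison concrete: accuracy depends on how many support vectors are kept (here six), while fitting the weights against all 51 samples, rather than only the support vectors themselves, is what the abstract credits for the KSM's robustness to noise.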
    Original language: English
    Publication status: Published - 2004
    Event: 2004 IEEE International Joint Conference on Neural Networks, IJCNN 2004 - Budapest, Hungary
    Duration: 25 Jul 2004 – 29 Jul 2004


    Conference: 2004 IEEE International Joint Conference on Neural Networks, IJCNN 2004
    Abbreviated title: IJCNN

