A combination of curve fitting algorithms to collect a few training samples for function approximation

Volume 17, Issue 3, pp. 355–364

Publication Date: 2017-07-17

http://dx.doi.org/10.22436/jmcs.017.03.02

Authors

Saeed Parsa - Department of Computer Engineering, Iran University of Science and Technology, Narmak, Tehran, 16844, Iran.
Mohammad Hadi Alaeiyan - Department of Computer Engineering, Iran University of Science and Technology, Narmak, Tehran, 16844, Iran.

Abstract

The aim of this paper is to approximate the numerical result of executing a program/function that takes a number of input parameters and returns a single output value, using only a small number of training points. Curve fitting methods are preferred to nondeterministic methods such as neural networks and fuzzy systems because they can provide relatively more accurate results with fewer members in the training dataset. However, curve fitting methods themselves are most often function specific and do not provide a general solution to the problem: they are typically targeted at fitting a specific family of functions to the training data. To provide a general curve fitting method, this paper suggests combining the Lagrange, spline, and trigonometric interpolation methods. The Lagrange method fits a polynomial of degree N to its training values. To improve the resultant fitted polynomial, our combined method blends the Lagrange polynomial with the polynomial produced by the spline method. If the absolute error between the actual and predicted values of a function is not satisfactory, trigonometric interpolation methods, which fit trigonometric functions, can be applied instead. Our experiments with a number of benchmark examples demonstrate the relatively high accuracy of our combined fitting method.
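The abstract does not spell out how the Lagrange and spline fits are blended, so the following Python sketch illustrates one plausible reading: fit both interpolants to the same small sample and average their predictions. SciPy's standard interpolators stand in for the authors' implementations, and the function combined_fit with its equal 0.5/0.5 weighting is an illustrative assumption, not the paper's algorithm.

    import numpy as np
    from scipy.interpolate import lagrange, CubicSpline

    def combined_fit(x_train, y_train):
        # Degree-N polynomial through all N+1 training points (Lagrange form).
        poly = lagrange(x_train, y_train)
        # Piecewise-cubic spline through the same points.
        spline = CubicSpline(x_train, y_train)
        # Assumed combination rule: a simple average of the two predictions.
        return lambda x: 0.5 * (poly(x) + spline(x))

    # Usage: approximate a black-box function from a few samples.
    f = lambda x: np.sin(x) + 0.1 * x**2   # stand-in for the program under test
    x_train = np.linspace(0.0, 4.0, 6)     # a small training set
    model = combined_fit(x_train, f(x_train))

    x_test = np.linspace(0.0, 4.0, 50)
    print("max absolute error:", np.max(np.abs(model(x_test) - f(x_test))))

On smooth targets, blending in the spline term damps the oscillation a high-degree Lagrange polynomial can exhibit near the interval ends (Runge's phenomenon), which is the kind of improvement the abstract alludes to.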

Keywords

Output function approximation, black box approximation, curve fitting, linear function approximation, nonlinear function approximation.

References

[1] L. E. Aik, Y. Jayakumar, A study of neuro-fuzzy system in approximation-based problems, Mat., 24 (2008), 113–130.
[2] R. Andonie, L. Fabry-Asztalos, C. B. Abdul-Wahid, S. Abdul-Wahid, G. I. Barker, L. C. Magill, Fuzzy ARTMAP prediction of biological activities for potential HIV-1 protease inhibitors using a small molecular data set, IEEE/ACM Trans. Comput. Biol. Bioinf., 8 (2011), 80–93.
[3] K. E. Atkinson, An introduction to numerical analysis, Second edition, John Wiley & Sons, Inc., New York, (1989).
[4] P. Benkő, G. Kós, T. Várady, L. Andor, R. Martin, Constrained fitting in reverse engineering, Comput. Aided Geom. Design, 19 (2002), 173–205.
[5] J. P. Berrut, L. N. Trefethen, Barycentric Lagrange interpolation, SIAM Rev., 46 (2004), 501–517.
[6] G. Bloch, F. Lauer, G. Colin, Y. Chamaillard, Support vector regression from simulation data and few experimental samples, Inform. Sci., 178 (2008), 3813–3827.
[7] G. Cybenko, Approximation by superpositions of a sigmoidal function, Math. Control Signals Systems, 2 (1989), 303–314.
[8] J. A. Dickerson, B. Kosko, Fuzzy function approximation with ellipsoidal rules, IEEE Trans. Systems Man Cybernet., 26 (1996), 542–560.
[9] M. Gori, F. Scarselli, Are multilayer perceptrons adequate for pattern recognition and verification?, IEEE Trans. Pattern Anal. Mach. Intell., 20 (1998), 1121–1132.
[10] J. W. Hines, A logarithmic neural network architecture for unbounded non-linear function approximation, IEEE International Conference on Neural Networks, Washington, DC, USA, 2 (1996), 1245–1250.
[11] https://en.wikipedia.org/wiki/Linear_interpolation.
[12] C.-F. Huang, C. Moraga, A diffusion-neural-network for learning from small samples, Internat. J. Approx. Reason., 35 (2004), 137–161.
[13] F. Lauer, G. Bloch, Incorporating prior knowledge in support vector regression, Mach. Learn., 70 (2008), 89–118.
[14] Y. Mizukami, Y. Wakasa, K. Tanaka, A proposal of neural network architecture for non-linear function approximation, Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 4 (2004), 605–608.
[15] K. Rodríguez-Vázquez, C. Oliver-Morales, Multi-branches genetic programming as a tool for function approximation, Genetic and Evolutionary Computation Conference, Seattle, WA, USA, (2004), 719–721.
[16] M. I. Shapiai, Z. Ibrahim, M. Khalid, Enhanced weighted Kernel regression with prior knowledge using robot manipulator problem as a case study, Procedia Eng., 41 (2012), 82–89.
[17] M. I. Shapiai, Z. Ibrahim, M. Khalid, L. W. Jau, S.-C. Ong, V. Pavlovich, Solving small sample recipe generation problem with hybrid WKRCF-PSO, Int. J. New Comput. Archit. Appl., 1 (2011), 810–820.
[18] M. I. Shapiai, Z. Ibrahim, M. Khalid, L. W. Jau, V. Pavlovich, A non-linear function approximation from small samples based on Nadaraya-Watson kernel regression, 2nd International Conference on Computational Intelligence, Communication Systems and Networks, Liverpool, UK, (2010), 28–32.
[19] M. I. Shapiai, Z. Ibrahim, M. Khalid, L. W. Jau, V. Pavlovic, J. Watada, Function and surface approximation based on enhanced kernel regression for small sample sets, Int. J. Innov. Comput. Inf. Control, 7 (2011), 5947–5960.
[20] T.-Y. Sun, S.-J. Tsai, C.-H. Tsai, C.-L. Huo, C.-C. Liu, Nonlinear function approximation based on Least Wilcoxon Takagi-Sugeno fuzzy model, Eighth International Conference on Intelligent Systems Design and Applications, Kaohsiung, Taiwan, 1 (2008), 312–317.
[21] T.-I. Tsai, D.-C. Li, Approximate modeling for high order non-linear functions using small sample sets, Expert Syst. Appl., 34 (2008), 564–569.
[22] G. S. Watson, Smooth regression analysis, Sankhyā Ser. A, 26 (1964), 359–372.
[23] J. Yuan, C.-L. Liu, X.-M. Liu, K.-S. Wang, T. Yu, Incorporating prior model into Gaussian processes regression for WEDM process modeling, Expert Syst. Appl., 36 (2009), 8084–8092.
