Kernel Online System for Fast Principal Component Analysis and its Adaptive Learning

Authors

  • Yevgeniy Bodyanskiy
  • Anastasiia Deineko
  • Antonina Bondarchuk
  • Maksym Shalamov

DOI:

https://doi.org/10.47839/ijc.20.2.2164

Keywords:

Kernel function, Data compression, Neural system, Hebb-Sanger neural network, Oja neuron

Abstract

An artificial neural system for data compression that sequentially processes linearly nonseparable classes is proposed. Its main elements are adjustable radial basis functions (Epanechnikov kernels), an adaptive linear associator trained with a multistep optimal algorithm, and a Hebb-Sanger neural network whose nodes are Oja neurons. The nodes are tuned with a modified Oja algorithm into which additional filtering (for noisy data) and tracking (for nonstationary data) properties have been introduced. The main feature of the proposed system is its ability to operate under significant nonlinearity of the initial data, which arrive sequentially and are nonstationary in nature. The effectiveness of the developed approach is confirmed by experimental results. The proposed kernel online neural system is designed to solve compression and visualization tasks when the initial data form linearly nonseparable classes, within the general problems of Data Stream Mining and Dynamic Data Mining. The main benefits of the proposed approach are its high speed and its ability to process data whose characteristics change over time.
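For intuition only, below is a minimal Python sketch of such a pipeline: an Epanechnikov kernel layer feeding a cascade of Oja neurons trained online by Sanger's generalized Hebbian rule. It is not the authors' implementation; the kernel centers and widths, the learning rate, the running-mean centering, and the plain Sanger update are illustrative assumptions standing in for the paper's adjustable kernels, multistep optimal learning algorithm, and modified Oja rule with filtering and tracking properties.

    import numpy as np

    def epanechnikov(x, centers, widths):
        # Epanechnikov kernel activations: k(u) = max(0, 1 - u^2),
        # with u = ||x - c|| / h for each center c and width h.
        u = np.linalg.norm(x - centers, axis=1) / widths
        return np.maximum(0.0, 1.0 - u ** 2)

    class OjaSangerPCA:
        # Online principal components of the kernelized input stream:
        # a cascade of Oja neurons trained by Sanger's rule (GHA).
        def __init__(self, n_inputs, n_components, lr=0.05, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.normal(scale=0.1, size=(n_components, n_inputs))
            self.mu = np.zeros(n_inputs)  # running mean, to center inputs
            self.lr = lr

        def step(self, phi):
            self.mu += 0.01 * (phi - self.mu)   # PCA expects centered data
            phi = phi - self.mu
            y = self.W @ phi                    # outputs of the Oja neurons
            # Sanger's rule: dW = lr * (y phi^T - tril(y y^T) W); the
            # lower-triangular term deflates earlier components, so neuron i
            # converges toward the i-th principal direction.
            self.W += self.lr * (np.outer(y, phi)
                                 - np.tril(np.outer(y, y)) @ self.W)
            return y                            # compressed representation

    # Toy usage on a sequentially arriving, slowly drifting stream.
    rng = np.random.default_rng(1)
    centers = rng.uniform(-1.0, 1.0, size=(10, 3))  # assumed kernel centers
    widths = np.full(10, 1.5)                        # assumed kernel widths
    model = OjaSangerPCA(n_inputs=10, n_components=2)
    for t in range(5000):
        x = rng.normal(scale=0.5, size=3) + 2e-4 * t  # nonstationary input
        code = model.step(epanechnikov(x, centers, widths))

Each observation is kernelized and updates the weights in constant time per sample, which is what makes this kind of scheme suitable for data streams.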

Published

2021-06-28

How to Cite

Bodyanskiy, Y., Deineko, A., Bondarchuk, A., & Shalamov, M. (2021). Kernel Online System for Fast Principal Component Analysis and its Adaptive Learning. International Journal of Computing, 20(2), 175-180. https://doi.org/10.47839/ijc.20.2.2164

Section

Articles