
Fig. 5. Forecasting of the behavior of a chaotic dynamic system with the help of the double wavelet-neuron

Table 1 shows the results of forecasting based on the double wavelet-neuron, compared with the results obtained with a standard wavelet-neuron using the gradient learning algorithm, a radial basis neural network, and a multilayer perceptron.

XII-th International Conference "Knowledge - Dialogue - Solution"

Table 1 - The results of time series forecasting

Neural network / Learning algorithm | Number of adjustable parameters | RMSE | Wegstrecke | Trefferquote
Double wavelet neuron / proposed learning algorithm (11), (15) for the wavelet-synapse parameters | 105 | 0.0078 | 1 | 99.8%
Wavelet-neuron / gradient learning algorithm with constant step for the wavelet-synapse parameters | 100 | 0.0101 | 0.98 | 98.8%
Radial basis neural network / RLSE | 100 | 0.5774 | 0.4883 | 55.2%
Multilayer perceptron / gradient learning algorithm | 115 | 0.6132 | 0.5882 | 75.5%

(Wegstrecke and Trefferquote are the path-length and hit-rate criteria, respectively.)

Thus, as the experimental results show, the proposed double wavelet-neuron with the learning algorithm (11), (15), having approximately the same number of adjustable parameters, ensures the best forecast quality and high learning speed in comparison with the traditional architectures.
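As a rough sketch of how the accuracy criteria of Table 1 could be computed (the function names are ours; reading Trefferquote as the fraction of correctly predicted directions of change is our assumption, and the Wegstrecke criterion is omitted):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error of the forecast."""
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def hit_rate(y_true, y_pred):
    """Fraction of steps where the predicted direction of change matches
    the actual one (our reading of the 'Trefferquote' criterion)."""
    dt = np.diff(np.asarray(y_true))
    dp = np.diff(np.asarray(y_pred))
    return np.mean(np.sign(dt) == np.sign(dp))

y  = [0.1, 0.3, 0.2, 0.5, 0.4]     # toy target series
yp = [0.12, 0.28, 0.22, 0.47, 0.41]  # toy forecast
print(rmse(y, yp))       # small positive number
print(hit_rate(y, yp))   # 1.0: every direction of change was matched
```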

Conclusions

The double wavelet-neuron architecture and its learning algorithm, which allows all parameters of the network to be adjusted, are proposed. The algorithm is very simple to implement numerically, possesses a high rate of convergence, and has additional smoothing and approximation properties.

Bibliography

[1]. Chui C. K. An Introduction to Wavelets. New York: Academic, 1992, 264 p.

[2]. Daubechies I. Ten Lectures on Wavelets. Philadelphia, PA: SIAM. 1992, 228 p.

[3]. Meyer Y. Wavelets: Algorithms and Applications. Philadelphia, PA: SIAM. 1993, 133 p.

[4]. Lekutai G., van Landingham H.F. Self-tuning control of nonlinear systems using neural network adaptive frame wavelets. Proc. IEEE Int. Conf. on Systems, Man and Cybernetics. Piscataway, N.J. 2, 1997, P. 1017-1022.

[5]. Bodyanskiy Ye., Lamonova N., Pliss I., Vynokurova O. An adaptive learning algorithm for a wavelet neural network.

Blackwell Synergy: Expert Systems. 22 (5), 2005, P. 235-240.

[6]. Bodyanskiy Ye., Kolodyazhniy V. Pliss I., Vynokurova O. Learning wavelet neuron based on the RASP-function.

Radio Electronics. Computer Science. Control. 1., 2004, P. 118-122.

[7]. [Entry in Russian; author names and title lost in extraction.] Proc. 12th Int. Conf., Vol. 3, 2005, P. 40-

[8]. [Entry in Russian; author names and title lost in extraction.] Proc. 3rd Conf. (MPZIS-2005), 2005, P. 14-15.

[9]. Billings S. A., Wei H.-L. A new class of wavelet networks for nonlinear system identification. IEEE Trans. on Neural Networks. 16 (4), 2005, P. 862-874.

[10]. Szu H. H., Telfer B., Kadambe S. Neural network adaptive wavelets for signal representation and classification. Opt. Eng. 31, 1992, P. 1907-1916.

[11]. Zhang Q. H., Benveniste A. Wavelet networks. IEEE Trans. on Neural Networks. 3 (6), 1992, P. 889-898.

[12]. Dickhaus H., Heinrich H. Classifying biosignals with wavelet networks. IEEE Eng. Med. Biol. Mag. 15 (5), 1996, P. 103-111.

[13]. Cao L. Y., Hong Y. G., Fang H. P., He G. W. Predicting chaotic time series with wavelet networks. Phys. D. 85, 1995, P. 225-238.

[14]. Oussar Y., Dreyfus G. Initialization by selection for wavelet network training. Neurocomputing. 34, 2000, P. 131-143.

[15]. Zhang J., Walter G. G., Miao Y., Lee W. N. W. Wavelet neural networks for function learning. IEEE Trans. on Signal Process. 43 (6), 1995, P. 1485-1497.

Neural and Growing Networks

[16]. Zhang Q. H. Using wavelet network in nonparametric estimation. IEEE Trans. on Neural Networks. 8 (2), 1997, P. 227-236.

[17]. Casdagli M. Nonlinear prediction of chaotic time series. Phys. D. 35, 1989, P. 335-356.

[18]. Soltani S. On the use of wavelet decomposition for time series prediction. Neurocomputing. 48, 2002, P. 267-277.

[19]. Yamakawa T., Uchino E., Samatu T. Wavelet neural networks employing over-complete number of compactly supported non-orthogonal wavelets and their applications. IEEE Int. Conf. on Neural Networks, Orlando, USA., 1994, P. 1391-1396.

[20]. Yamakawa T., Uchino E., Samatu T. The wavelet network using convex wavelets and its application to modeling dynamical systems. The Trans. on the IEICE. J79-A. 12, 1996, P. 2046-2053.

[21]. Yamakawa T. A novel nonlinear synapse neuron model guaranteeing a global minimum - Wavelet neuron. Proc. IEEE Int. Symp. on Multiple-Valued Logic. Fukuoka, Japan: IEEE Comp. Soc., 1998, P. 335-

[22]. Baumann M. Nutzung neuronaler Netze zur Prognose von Aktienkursen (Using neural networks for stock price forecasting). Report Nr. 2/96, TU Ilmenau, 1996, 113 p.

Authors' Information

Bodyanskiy Yevgeniy - Doctor of Technical Sciences, Professor of the Artificial Intelligence Department and Scientific Head of the Control Systems Research Laboratory, Kharkiv National University of Radio Electronics, Lenina av. 14, Kharkiv, Ukraine, 61166, Tel. +380577021890, e-mail: bodya@kture.kharkov.ua

Lamonova Nataliya - Candidate of Technical Sciences (equivalent Ph.D.), Senior Research Assistant of the Control Systems Research Laboratory, Kharkiv National University of Radio Electronics, Lenina av. 14, Kharkiv, Ukraine, 61166, Tel. +380577021890, e-mail: webmaster@natashka.de

Vynokurova Olena - Candidate of Technical Sciences (equivalent Ph.D.), Senior Research Assistant of the Control Systems Research Laboratory, Kharkiv National University of Radio Electronics, Lenina av. 14, Kharkiv, Ukraine, 61166, Tel. +380577021890, e-mail: vinokurova@kture.kharkov.ua

GROWING NEURAL NETWORKS BASED ON ORTHOGONAL ACTIVATION FUNCTIONS

Yevgeniy Bodyanskiy, Irina Pliss, Oleksandr Slipchenko

Abstract: In this paper, an ontogenic artificial neural network (ANN) is proposed. The network uses orthogonal activation functions, which allows a significant reduction of computational complexity. Another advantage is numerical stability, because the system of activation functions is linearly independent by definition. A learning procedure for the proposed ANN with guaranteed convergence to the global minimum of the error function in the parameter space is developed. An algorithm for network structure adaptation is also proposed; it allows adding or deleting a node in real time without retraining the network. Simulation results confirm the efficiency of the proposed approach.

Keywords: ontogenic artificial neural network, orthogonal activation functions, time series forecasting.

Introduction

Artificial neural networks (ANNs) are widely applied to a variety of problems such as information processing, data analysis, system identification, and control under structural and parametric uncertainty [1, 2].

One of the most attractive properties of ANNs is the possibility to adapt their behavior to the changing characteristics of the modeled system. By adaptivity we understand not only the adjustment of parameters (synaptic weights), but also the possibility to adjust the architecture (the number of nodes). The goal of the present paper is the development of an algorithm for structural and synaptic adaptation of ANNs for nonlinear system modeling, capable of online operation, i.e. sequential information processing without re-training after structure modification.

The problem of optimizing a neural network architecture has been studied for quite a long time. Algorithms that start with a simple architecture and gradually add new nodes during learning are called constructive. In contrast, destructive algorithms start with an initially redundant network and simplify it as learning proceeds; this process is called pruning.

The radial basis function network (RBFN) is one of the most popular neural network architectures [3]. One of the first constructive algorithms for such networks was proposed by Platt and named the resource-allocating network [4]. By now, a number of modifications of this procedure are known [5, 6]. Another well-known approach is the cascade-correlation architecture developed by Fahlman and Lebiere [7].

Among the destructive algorithms, the most popular are optimal brain damage [8] and optimal brain surgeon [9]. In these methods, the significance of a node, or of a connection between nodes, is determined by the change in the error function that its deletion incurs. For this purpose, the matrix of second derivatives of the optimized function with respect to the tunable parameters is analyzed. Both procedures are computationally quite complex.
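For reference, the optimal brain surgeon significance of a weight is usually written as s_i = w_i^2 / (2 [H^{-1}]_{ii}), the estimated increase of the error function if weight i is deleted. A minimal sketch under that standard formula (the toy weights and Hessian are ours, not from this paper):

```python
import numpy as np

def obs_saliencies(w, H):
    """Optimal Brain Surgeon saliency s_i = w_i^2 / (2 [H^-1]_ii):
    estimated error increase when weight i is removed."""
    H_inv = np.linalg.inv(H)
    return w ** 2 / (2.0 * np.diag(H_inv))

w = np.array([2.0, 0.05, -1.0])   # toy weight vector
H = np.diag([1.0, 4.0, 2.0])      # toy Hessian of the error function
print(obs_saliencies(w, H))       # weight 0.05 has by far the lowest saliency
```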

Besides that, an essential disadvantage is the need for re-training after the deletion of non-significant nodes, which makes real-time operation of these algorithms impossible. Other algorithms, such as [10], are heuristic and lack universality.

It should be noted that there is no universal and convenient algorithm for manipulating the number of nodes that is suitable for most problems and architectures. Many of the algorithms proposed so far lack theoretical justification, predictability of the results of their application, and the ability to operate in real time.

Network Architecture

Let us consider the network architecture that implements the following nonlinear mapping:

\hat{y}(k) = f(x(k)) = \sum_{i=1}^{n} \sum_{j=1}^{h_i} w_{ji} \varphi_{ji}(x_i(k)), (1)

where k = 1, 2, ... is the discrete time or the ordinal number of a sample in the training set, w_{ji} are the tunable synaptic weights, \varphi_{ji}(\cdot) is the j-th activation function for the i-th input variable, h_i is the number of activation functions for the corresponding input variable, and x_i(k) is the value of the i-th input signal at time instant k (or for the k-th training sample).

The proposed architecture contains h = \sum_{i=1}^{n} h_i tunable parameters, and it can be readily seen that this number lies between those of scatter-partitioned and grid-partitioned systems.
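A minimal sketch of evaluating the mapping (1) for one input vector; the two-input network, its polynomial activations, and the weights are our own illustrative choices:

```python
# y = sum_i sum_j w_ji * phi_ji(x_i) -- the nonlinear mapping (1)
def network_output(x, weights, basis):
    y = 0.0
    for i, xi in enumerate(x):              # over input variables
        for j, phi in enumerate(basis[i]):  # over activations of input i
            y += weights[i][j] * phi(xi)
    return y

# h_1 = 2 and h_2 = 3 Legendre-polynomial activations (hand-picked)
basis = [
    [lambda x: 1.0, lambda x: x],
    [lambda x: 1.0, lambda x: x, lambda x: 0.5 * (3 * x**2 - 1)],
]
w = [[0.5, 1.0], [0.0, -1.0, 2.0]]  # illustrative weights
print(network_output([0.2, 0.4], w, basis))
```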

We propose the use of orthogonal polynomials of one variable as the basis functions. The particular system of functions can be chosen according to the specifics of the problem being solved. If the input data are normalized to the hypercube [-1, 1]^n, the system of Legendre polynomials, orthogonal on the interval [-1, 1] with the weight function \rho(x) \equiv 1 [17], can be used:

P_n(x) = 2^{-n} \sum_{m=0}^{[n/2]} (-1)^m \frac{(2n-2m)!}{m!(n-m)!(n-2m)!} x^{n-2m}, (2)

where [\cdot] denotes the integer part of a number.
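In practice the Legendre polynomials (2) are more conveniently generated by the Bonnet recurrence than by the explicit factorial sum; a sketch (the helper function is ours), with a numerical check of orthogonality on [-1, 1]:

```python
import numpy as np

def legendre(n, x):
    """P_n(x) via the Bonnet recurrence
    (m+1) P_{m+1} = (2m+1) x P_m - m P_{m-1},
    equivalent to the explicit sum (2)."""
    p_prev = np.ones_like(x, dtype=float)   # P_0
    p = np.asarray(x, dtype=float)          # P_1
    if n == 0:
        return p_prev
    for m in range(1, n):
        p_prev, p = p, ((2 * m + 1) * x * p - m * p_prev) / (m + 1)
    return p

# Orthogonality with unit weight: the integral of P_2 * P_3 over [-1, 1]
# should vanish (Riemann-sum approximation).
x = np.linspace(-1.0, 1.0, 4001)
dx = x[1] - x[0]
val = np.sum(legendre(2, x) * legendre(3, x)) * dx
print(val)  # approximately 0
```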

Among other possible choices for activation functions we should mention Chebyshev [15, 16] and Hermite [18] polynomials as well as non-sinusoidal orthogonal systems proposed by Haar and Walsh.

Synaptic Adaptation

The sum of squared errors will be used as the learning criterion:

E(k) = \sum_{p=1}^{k} e^2(p) = \sum_{p=1}^{k} \Big( y(p) - \sum_{i=1}^{n} \sum_{j=1}^{h_i} w_{ji} \varphi_{ji}(x_i(p)) \Big)^2. (3)

For the convenience of further notation, let us re-write the expression for the output of the neural network (1) in the form

\hat{y}(k+1) = \varphi^T(k+1) W(k), (4)

where \varphi(k) = (\varphi_{11}(x(k)), \varphi_{21}(x(k)), \ldots, \varphi_{h_n n}(x(k)))^T is the (h \times 1) vector of the values of the basis functions for the k-th element of the training set (or at the instant k for sequential processing), and W(k) = (w_1(k), \ldots, w_h(k))^T is the (h \times 1) vector of estimates of the synaptic weights at iteration k.

Since the output of the proposed neural network depends linearly on the tuned parameters, we can use the least squares procedure to estimate them. For sequential processing, e.g. in the case of online identification, we can use the recursive least squares method:

W(k+1) = W(k) + \frac{P(k) \big( y(k+1) - W^T(k) \varphi(k+1) \big) \varphi(k+1)}{1 + \varphi^T(k+1) P(k) \varphi(k+1)},

P(k+1) = P(k) - \frac{P(k) \varphi(k+1) \varphi^T(k+1) P(k)}{1 + \varphi^T(k+1) P(k) \varphi(k+1)}. (5)

Because of the orthogonality of the basis functions, the matrix P(k) will tend to a diagonal form as k \to \infty.

If the activation functions are orthonormal, P(k) will tend to the identity matrix. Due to this property, the learning procedure retains numerical stability as the number of samples in the training sequence grows.
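A minimal sketch of the recursive least squares update (5) for a linear-in-weights model; here the regressor phi plays the role of the basis-function vector \varphi(k+1), and the noiseless synthetic data are ours:

```python
import numpy as np

def rls_step(W, P, phi, y):
    """One RLS update (5): gain, weight correction, covariance shrink."""
    phi = phi.reshape(-1, 1)
    denom = 1.0 + float(phi.T @ P @ phi)
    err = y - float(W.T @ phi)                 # a priori output error
    W = W + (P @ phi) * err / denom            # weight update
    P = P - (P @ phi @ phi.T @ P) / denom      # covariance update
    return W, P

rng = np.random.default_rng(0)
W_true = np.array([[1.0], [-2.0], [0.5]])      # unknown "true" weights
W = np.zeros((3, 1))
P = 1e3 * np.eye(3)                            # large initial covariance
for _ in range(200):
    phi = rng.standard_normal(3)
    y = float(W_true.T @ phi.reshape(-1, 1))   # noiseless observation
    W, P = rls_step(W, P, phi, y)
print(W.ravel())  # approaches [1.0, -2.0, 0.5]
```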

Structure Adaptation

We consider sequential learning that minimizes (3). This leads to the estimate

W_h(k) = R_h^{-1}(k) F_h(k), (6)

R_h^{-1}(k) = R_h^{-1}(k-1) - \frac{R_h^{-1}(k-1) \varphi(k) \varphi^T(k) R_h^{-1}(k-1)}{1 + \varphi^T(k) R_h^{-1}(k-1) \varphi(k)}, (7)

F_h(k) = F_h(k-1) + \varphi(k) y(k). (8)

The use of the recursive least squares (RLS) method and its modifications allows us to obtain an accurate and well-interpretable measure of the significance of each function in the mapping (1). This mapping can be considered as an expansion of an unknown reconstructed function in the basis \{\varphi_{ji}(\cdot)\}. Obviously, if the absolute value of any of the coefficients in this expansion is small, then the corresponding function can be excluded from the basis without significant loss of accuracy. The remaining synaptic weights do not need to be retrained if the weight of the excluded node is close to zero. Otherwise, the network should be retrained.
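The pruning rule just described (drop a basis function whose expansion coefficient is negligible) can be sketched as follows; the threshold value and the basis labels are illustrative assumptions:

```python
import numpy as np

def prune_small_weights(W, basis_labels, tol=1e-2):
    """Drop basis functions whose coefficient magnitude is below tol.
    Per the text, the remaining weights need no retraining when the
    dropped weight is close to zero (tol is our illustrative choice)."""
    keep = np.abs(W).ravel() >= tol
    return W[keep], [b for b, k in zip(basis_labels, keep) if k]

W = np.array([0.9, 1e-4, -0.4, 2e-3])   # toy expansion coefficients
labels = ["P0", "P1", "P2", "P3"]       # hypothetical basis names
W2, kept = prune_small_weights(W, labels)
print(kept)  # ['P0', 'P2']: the two negligible terms are removed
```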

Assume that a vector of synaptic weights W_h(k) of a network comprising h nodes was obtained at the instant k using formula (6), where the index h determines the number of basis functions (the dimension of \varphi(k)).

Also assume that the absolute value of the parameter w_h(k) is small, and we want to exclude the corresponding function from the expansion (1). The assumption that it is exactly the h-th activation function that is insignificant is not restrictive, because we can always re-number the basis functions. This will result only in a rearrangement of the rows and columns of the matrix R_h(k) and in a change of the ordering of the elements of the vector F_h(k). However, the rearrangement of columns and/or rows of a matrix does not influence the subsequent matrix operations.

Taking into account the fact that the matrix Rh(k) is symmetric, we obtain:

W_h(k) = R_h^{-1}(k) F_h(k) = \begin{pmatrix} R_{h-1}(k) & \rho_{h-1}(k) \\ \rho_{h-1}^T(k) & r_{hh}(k) \end{pmatrix}^{-1} \begin{pmatrix} F_{h-1}(k) \\ f_h(k) \end{pmatrix}, (9)

where r_{ij}(k) is the element in the i-th row and j-th column of the matrix R_h(k), \rho_{h-1}(k) = (r_{1h}(k), \ldots, r_{h-1,h}(k))^T = (r_{h1}(k), \ldots, r_{h,h-1}(k))^T, and f_i(k) is the i-th element of the vector F_h(k).

