GENERALIZING OF NEURAL NETS: FUNCTIONAL NETS OF SPECIAL TYPE

Volodymyr Donchenko, Mykola Kirichenko, Yuriy Krivonos

Abstract: A special generalization of artificial neural nets, the so-called RFT-FN, is discussed in this report.

This refinement touches upon the constituent elements of the conception of an artificial neural network: the choice of the main primary functional elements in the net, the way of connecting them (topology), and the structure of the net as a whole. As to the last, the structure of the proposed functional net is determined dynamically, in the course of constructing the net itself, by a special recurrent procedure. The number of newly joined primary functional elements, the topology of their connection and the tuning of the primary elements constitute the content of each recurrent step. The procedure terminates when a "natural" criterion is fulfilled, relating, for example, to the residuals. The functional net proposed can be used in solving the approximation problem for functions represented by their observations, for classifying and clustering, pattern recognition, etc. The recurrent procedure provides versatile optimizing possibilities, both at each step of the procedure and as a whole: by the choice of the newly joined elements and topology, by affine transformations of the input and intermediate coordinates, as well as by their coordinate-wise nonlinear transformations. All considerations are essentially based on, and constructively and clearly represented by, the means of the Generalized Inverse.

Neural and Growing Networks

Introduction

Artificial neural nets are technological elements used in various applications ([Amit, 2002], [Veelenturf, 1995] for example), especially under model uncertainty. These may be the approximation problem for functions represented by their observations, control tasks, classification problems and so on.

The power of artificial neural nets (ArtNN) is substantially determined by their implicit use of composition of functions [Donchenko, Kirichenko, Serbaev, 2004]. The might of this instrument was certified by [Kolmogorov, 1966] and [Arnold, 1967].

But some shortcomings of the ArtNN are obvious, namely: the constraints on the primary functional elements represented by neurons, the constraints on topology, and the constraints on optimization, particularly in the form of Back Propagation.

In this report an attempt is undertaken to extend the ArtNN to functional nets by taking as primary a special functional element, the so-called ERRT (elementary recursive regressive transformation), by broadening the topology and by introducing "natural" optimizing parameters.

The fulcrum for implementing this attempt arises from the theory of generalized inverse matrices (see, for example, [Алберт, 1976]) developed by one of the authors and represented in [Кириченко, Лепеха, 2002].

The approach proposed is realized in the conception of the so-called Recursive nonlinear Functional Transformation - Functional Net (RFT-FN). The idea of transformations of this type in the variant of inverse recursion was proposed in [Кириченко, Крак, Полищук, 2004], focused on the optimizing procedure for the ERRT. We represent here another variant of RFT-FN, the variant of so-called direct recursion, introduced and discussed by the authors in [Donchenko, Kirichenko, 2005]. As has already been noticed, RFT-FN embodies the main ideas of the classical ArtNN: using a composition of standard primary functional elements (artificial neurons, a.n.) connected according to some topology for constructing a complex object. The primary element of the composition, the a.n., mathematically represents a comparatively simple function: a composition of a linear function (a linear functional of the multidimensional input variable) and a simple standard scalar function. The conception of the RFT-FN leaves the main idea of the ArtNN unchanged: the idea of constructing complex objects through a composition of standard simple ones. But some substantial refinements are proposed and implemented. The main ones are the following:

• Expanding the domain of feasible functions for the basic standard elements and introducing a special adjusting procedure for the newly connected ERRT;

• Introducing a special recursive dynamic procedure for constructing the RFT-FN. The term "dynamic" means that no a priori structure is fixed for the RFT-FN; the structure is determined exclusively by the quality of the construction, characterized by some "natural" quality functional, for example the residuals of the approximation. Thus the termination of the recursive constructing procedure is determined dynamically in the course of the procedure itself;

• Expanding the variants of feasible connections for the newly connected elements and their number.

There are some more enhancements of the ArtNN proposed and implemented in the conception of the RFT-FN. These are: coordinate-wise nonlinear transformations of the input and intermediate inner coordinates along with linear transformations of them.

All the refinements make it possible to optimize the construction of the RFT-FN, at each recursive step and as a whole, in the following ways:

• By the choice of feasible functions for the ERRT, by the choice of linear transformations of its coordinates, and by coordinate-wise nonlinear transformations of the coordinates of the input variables for the newly connected ERRT: a single one or a number of them;

• By the choice of a topology for the newly connected elements. There are three main types of connections for the newly connected elements: parallel by input (parinput), parallel by output (paroutput) or sequential (seq).
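As an illustration, the recursive construction with a residual-based termination criterion might be sketched as follows. The additive combination of element outputs, the random choice of the affine matrices C, the candidate set of nonlinearities and all names here are simplifying assumptions for the sketch, not the paper's exact procedure:

```python
import numpy as np

def build_net(X, Y, candidates=(np.tanh, lambda z: z), max_steps=20, tol=1e-8):
    """Sketch of the dynamic construction loop: at each recursive step a
    candidate nonlinearity u is tried for a newly joined element, the element
    is tuned by pseudoinverse on the current residual, and the procedure
    stops when the residual improvement falls below a threshold."""
    rng = np.random.default_rng(1)
    M, n1 = X.shape
    n = n1 + 1
    X_ext = np.hstack([X, np.ones((M, 1))])   # affine extension of inputs
    residual = Y.copy()
    net = []                                   # tuned elements (u, C, A)
    prev_err = np.inf
    for _ in range(max_steps):
        best = None
        for u in candidates:                   # choice of the feasible function
            C = rng.normal(size=(n, n))        # affine transform (random here)
            Z = u(X_ext @ C.T)
            A = residual.T @ np.linalg.pinv(Z.T)   # tune by generalized inverse
            err = np.linalg.norm(residual - Z @ A.T)
            if best is None or err < best[0]:
                best = (err, u, C, A, Z)
        err, u, C, A, Z = best
        if prev_err - err < tol:               # "natural" termination criterion
            break
        net.append((u, C, A))
        residual = residual - Z @ A.T
        prev_err = err
    return net, residual
```

Each accepted step can only shrink the residual, since the pseudoinverse yields a least-squares fit of the newly joined element to the current residual.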

The implementation of the RFT-FN conception will be demonstrated on the approximation problem for functions represented by their observations. We draw the reader's attention to the fact that this problem is widely represented in publications, in deterministic as well as statistical formulations. As regards the latter, we refer to [Линник, 1962], [Вапник, 1979], [Ивахненко, 1969].

XII-th International Conference "Knowledge - Dialogue - Solution"

1. General Conception of the Functional Net: RFT-FN

The RFT-FN constructing procedure consists in recurrently joining ERRT to the RFT-FN already constructed during the previous steps, in compliance with one of three certain types of connection. These types will be denoted parinput, paroutput and seq. The connection types correspond to natural transformations of the input signal: parallel by input or output, and sequential.

1.1. Description of the Primary Functional Elements: ERRT

The basic constructive element of the RFT-FN is the ERRT element [9], which is defined as a map from R^{n-1} to R^m designed according to the next form:

y = A^+ u(Cx),  (1)

It approximates the dependence represented by the learning sample (x_1^{(0)}, y_1^{(0)}), ..., (x_M^{(0)}, y_M^{(0)}), x_i^{(0)} ∈ R^{n-1}, y_i^{(0)} ∈ R^m, i = 1, ..., M, where:

C is an (n×n)-matrix defining an affine map between R^{n-1} and R^n, fixed when synthesizing the ERRT;

u is a coordinate-wise nonlinear map from R^n to R^n; each of the nonlinear real functions u_i, i = 1, ..., n, transforming the coordinates belongs to a finite set of functions. This set is fixed but open for extension; it also includes the identity function. We consider each of these functions to be smooth enough. The nonlinear map is chosen so as to minimize the discrepancy between input and output on the learning sample;

A^+ is the trace-norm minimal solution of the matrix equation

A X_C^u = Y,  (2)

in which X_C^u is the matrix assembled from the vector-columns u(C x_i^{(0)}) = u(z_i^{(0)}), and the matrix Y from y_i^{(0)}, i = 1, ..., M.

1.2. Recursive Procedure and Topology of Connections in RFT-FN Constructing

The composition-recursion in the function-building procedure, in the variant of direct recursion proposed below, will be considered in a generalized version. In this version the number k_m of newly joined ERRT may be more than one.
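Under the assumption that the affine action of C is realized by appending a constant coordinate 1 to the input, the tuning of a single ERRT by equations (1)-(2) might be sketched as follows; the function names are hypothetical, and numpy's pinv is used to supply the minimal-norm least-squares solution of (2):

```python
import numpy as np

def fit_errt(X, Y, C, u=np.tanh):
    """Sketch of tuning one ERRT: given inputs X (M x (n-1)), targets Y
    (M x m), a fixed (n x n) matrix C and a coordinate-wise nonlinearity u,
    find A^+ as the minimal-norm least-squares solution of A X_C^u = Y."""
    M = X.shape[0]
    # affine extension: append a constant 1 so C acts affinely on R^{n-1}
    X_ext = np.hstack([X, np.ones((M, 1))])   # M x n
    Z = u(X_ext @ C.T)                        # rows are u(C x_i), M x n
    # pseudoinverse gives the minimal-norm least-squares solution of (2)
    A = Y.T @ np.linalg.pinv(Z.T)             # m x n
    return A

def errt_predict(A, C, x, u=np.tanh):
    """Evaluate y = A^+ u(C x) of eq. (1) on a single input x in R^{n-1}."""
    x_ext = np.append(x, 1.0)
    return A @ u(C @ x_ext)
```

When the columns u(C x_i^{(0)}) span R^n, the solution of (2) is unique and the fit reproduces any dependence of exactly this form.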

The total number of ERRT used will be denoted by T: T = Σ_{m=1}^{N} k_m, N being the number of recursive calls of the procedure.

Direct recursion is represented by the next figures and corresponding equations, depending on the type of joining.

• Parinput (see fig. 1). The input-output equations representing this type of topology in the recursive procedure are of the next form:

x(i + j) = A^+_{i+j-1} u(C_{i+j-1} x(i)),
z(i + j) = z(i + j - 1) + A^+_{i+j-1} u(C_{i+j-1} x(i)),  (3)

where i = Σ_{l=1}^{m} k_l, j = 1, ..., k_{m+1}.

• Paroutput (see fig. 2). Paroutput topology in the recurrent step is represented by the chart in fig. 2 and by the next input-output equations:

Fig. 1. Parinput. Fig. 2. Paroutput.

And at last for the sequential type we have correspondingly:

• Seq

x(i + j) = A^+_{i+j-1} u(C_{i+j-1} x(i + j - 1)),
z(i + j) = z(i) + A^+_{i+j-1} u(C_{i+j-1} x(i + j - 1)),  (5)

where i = Σ_{l=1}^{m} k_l, j = 1, ..., k_{m+1}.

2. Special Class of Beam Dynamics with Delay

The optimization of the RFT-FN, as follows from (3)-(5), is virtually reduced to solving the optimization problem for beam dynamics of a special type, determined below. Namely, we will introduce and consider two classes of special discrete dynamic systems with delay, named below simple and combined. Classical results about conjugate systems and Hamilton functions will be extended to the systems introduced, as well as the results about differentiating the functional with respect to controls.

2.1. Special Class of Beam Dynamics with Delay: Basic Definitions

The two types of beam dynamics, with simple delay and combined, are defined in the next way:

• dynamics x(j + 1) = f(x(j), x(j - s(j)), u(j), j), j = 0, ..., N - 1;  (7)

• set of the initial states Ini = {x_1^{(0)}, ..., x_M^{(0)}}: x(0) = x^{(0)} ∈ Ini;

• functional on the set of trajectories I(U) = Σ_{x(0)∈Ini} Φ(x(N));  (8)

• delay function s(j): s(0) = 0, s(j) ∈ {2, ..., j}, j = 0, ..., N - 1.
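A minimal sketch of evaluating the functional (8) over the beam of trajectories generated by the delayed dynamics (7); the concrete f, Φ and s used in any run are the caller's illustrative choices, not fixed by the paper:

```python
import numpy as np

def beam_functional(f, phi, s, Ini, U, N):
    """Propagate each initial state of the beam through the delayed dynamics
    x(j+1) = f(x(j), x(j - s(j)), u(j), j) and sum phi over the terminal
    states, as in (7)-(8). s is the delay function with s(0) = 0."""
    total = 0.0
    for x0 in Ini:
        traj = [np.asarray(x0, dtype=float)]
        for j in range(N):
            delayed = traj[j - s(j)]          # delayed state x(j - s(j))
            traj.append(f(traj[j], delayed, U[j], j))
        total += phi(traj[N])
    return total
```

Keeping the whole trajectory in a list makes the delayed state x(j - s(j)) available by direct indexing, which is the natural data layout for the conjugate-system computations of the next subsection.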

Simple systems are defined by a collection of functions f(z, u, j), j = 0, ..., N - 1, and combined ones by f(z, v, u, j), j = 0, ..., N - 1.

2.2. Conjugate Systems and Hamilton Function

Given the delayed beam dynamics, simple or combined, we define the conjugate systems and the Hamilton functions depending on the type of the delay beam dynamics.

Simple delay:

• Conjugate system p(k), k = N, ..., 0:

p(N) = -grad_{x(N)} Φ(x(N)),
p(k) = grad_{x(k)} Σ_{j∈J(k)} {p^T(j + 1) f(x(j - s(j)), u(j), j)},
J(k) = {j : j - s(j) = k, j ≥ k}, k = N - 1, ..., 0;

• Hamilton function:

H(p(k + 1), x(k - s(k)), u(k), k) = p^T(k + 1) f(x(k - s(k)), u(k), k), k = N - 1, ..., 0.

Combined delay:

• Conjugate system p(k), k = N, ..., 0:

p(N) = -grad_{x(N)} Φ(x(N)),
p(k) = grad_z {p^T(k + 1) f(x(k), x(k - s(k)), u(k), k)} + Σ_{j∈J(k)} grad_v {p^T(j + 1) f(x(j), x(k), u(j), j)},

where J(k) is the same as for simple systems, k = N - 1, ..., 0;

• Hamilton function:

H(p(k + 1), x(k), x(k - s(k)), u(k), k) = p^T(k + 1) f(x(k), x(k - s(k)), u(k), k).

2.3. Gradient in Beam Dynamics with Delay

The classical results hold for the beam dynamics with delay under the classical assumptions on f(z, u, j) or f(z, v, u, j), j = 0, ..., N - 1, and Φ. These results are captured in the next two theorems.

Theorem 1. For the simple delay beam dynamics the gradients with respect to the controls are represented by the next relations:

grad_{u(k)} I(U) = -Σ_{i=1}^{M} grad_{u(k)} H(p^{(i)}(k + 1), x^{(i)}(k - s(k)), u(k), k), k = 0, ..., N - 1.

Theorem 2. For the combined delay beam dynamics the gradients with respect to the controls are represented by the next relations:

grad_{u(k)} I(U) = -Σ_{i=1}^{M} grad_{u(k)} H(p^{(i)}(k + 1), x^{(i)}(k), x^{(i)}(k - s(k)), u(k), k), k = 0, ..., N - 1.

The index i, i = 1, ..., M, corresponds to the trajectories with initial states x_i^{(0)} ∈ Ini.
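The gradient formula of Theorem 1 can be checked numerically in the degenerate no-delay case s(j) = 0, where J(k) = {k} and the conjugate system reduces to the standard adjoint recursion. The scalar dynamics f(x, u, j) = tanh(x) + u and the terminal cost Φ(x) = x^2 below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Numerical check of the Theorem 1 gradient formula for s(j) = 0:
# grad_{u(k)} I(U) = -sum_i p^{(i)}(k+1) * df/du, with df/du = 1 here.

def grad_via_conjugate(Ini, U, N):
    grads = np.zeros(N)
    for x0 in Ini:
        xs = [x0]
        for j in range(N):                        # forward trajectory
            xs.append(np.tanh(xs[j]) + U[j])
        p = np.zeros(N + 1)
        p[N] = -2.0 * xs[N]                       # p(N) = -grad Phi(x(N))
        for k in range(N - 1, -1, -1):            # conjugate (adjoint) system
            p[k] = (1.0 - np.tanh(xs[k]) ** 2) * p[k + 1]
        for k in range(N):
            grads[k] += -p[k + 1]                 # -grad_u H, df/du = 1
    return grads

def I_of_U(Ini, U, N):
    """Beam functional I(U) = sum over initial states of Phi(x(N))."""
    total = 0.0
    for x0 in Ini:
        x = x0
        for j in range(N):
            x = np.tanh(x) + U[j]
        total += x ** 2
    return total
```

Comparing grad_via_conjugate against central finite differences of I_of_U confirms the sign convention induced by p(N) = -grad Φ(x(N)).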

3. Functional Nets and Beam Dynamics with Delay

Combined delay beam dynamics are very important because of their role in the representation of RFT-FN constructions.

Theorem 3. The RFT-FN predictor with N direct recursions using k_m, m = 1, ..., N, ERRT respectively can be represented by a combined delay beam dynamics on the time interval 0, ..., T, T = Σ_{l=1}^{N} k_l. The elements of such representations are constructive, depending on the type of joining. The quality functional of the system is of the next form:

I(C) = Σ_{k=1}^{M} || y_k^{(0)} - z_2(T) ||^2,

where z_2(T) is one of the two output components of the beam dynamics.

4. Functional Nets Optimal Design

Theorem 3 enables choosing C_0, C_1, ..., C_{T-1} optimally for the RFT-FN.
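One simplified way of exploiting this choice might be sketched as follows, for a single ERRT with A^+ re-tuned by the generalized inverse at every trial C. The gradient of I(C) is approximated here by finite differences purely for illustration; the paper's own scheme obtains it from the conjugate-system representation of Theorems 1-2:

```python
import numpy as np

def loss(C, X_ext, Y, u=np.tanh):
    """Residual functional I(C) for a single ERRT, with A^+ re-tuned by
    the generalized inverse for the given C (cf. eq. (2))."""
    Z = u(X_ext @ C.T)
    A = Y.T @ np.linalg.pinv(Z.T)
    return float(np.linalg.norm(Y - Z @ A.T) ** 2)

def optimize_C(X, Y, iters=30, lr=0.1, eps=1e-5, u=np.tanh, seed=0):
    """Illustrative descent on I(C) with a finite-difference gradient and a
    crude accept-if-improving step-size rule."""
    rng = np.random.default_rng(seed)
    M, n1 = X.shape
    n = n1 + 1
    X_ext = np.hstack([X, np.ones((M, 1))])
    C = rng.normal(size=(n, n))
    cur = loss(C, X_ext, Y, u)
    history = [cur]
    for _ in range(iters):
        G = np.zeros_like(C)
        for a in range(n):                    # finite-difference gradient in C
            for b in range(n):
                Cp = C.copy(); Cp[a, b] += eps
                Cm = C.copy(); Cm[a, b] -= eps
                G[a, b] = (loss(Cp, X_ext, Y, u) - loss(Cm, X_ext, Y, u)) / (2 * eps)
        trial = C - lr * G
        trial_loss = loss(trial, X_ext, Y, u)
        if trial_loss < cur:                  # accept only improving steps
            C, cur = trial, trial_loss
        else:
            lr *= 0.5                         # crude step-size control
        history.append(cur)
    return C, history
```

Because only improving steps are accepted, the recorded history of I(C) is non-increasing by construction.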
