Mathematical recognition theory has a long history, and the variety of its reality-modeling methods is wide.

Every research group has its own traditions and usually works in a specific area of mathematics. There are two basic approaches which are commonly said to be distinct: the functional and the algorithmic ones. For example, neural networks approximate an output function, but their parameters have no meaningful interpretation.

Algorithmic models, such as estimate calculation algorithms, provide interpretable parameters, though they may have high computational complexity. Integration of scientific schools and small groups of specialists in the framework of joint projects makes it possible to reveal the potential of different methods and their combinations. The development of one such integrated approach is connected with a series of INTAS projects carried out by research groups from Russia, Spain, Armenia and other countries.

The algebraic theory of pattern recognition, based upon discrete analysis and algebra [1], is the basic approach that has been used for 35 years at the Computing Centre of RAS under the direction of academician Yu.I. Zhuravlev. The research activities of the Institute for Informatics and Automation Problems of NAS Armenia lie in the same area of discrete recognition models; their specific feature is the use of optimization structures of discrete isoperimetric problems, discrete topology and hierarchical class searching [2]. Neural network models, especially those with polynomial output and linear activation functions [3], are the main area of interest of the Spanish group. In particular, they study temporal signal delays in recognition tasks. Good results have been achieved in stock exchange forecasting and similar problems.

Several hybrid methods and applications for pattern recognition have been developed by these groups in the framework of INTAS projects 96-952, 00-367, 00-636 and 03-55-1969. One of them is based on combining neural networks with logical correction schemes. The main motivation of this research was the idea of creating a pattern recognition and forecasting application that requires minimal human intervention, or none at all: an operator with no specific knowledge of mathematics should be able to use the software. Such an NNLC (Neural Networks with Logical Correction) application has been developed in the framework of INTAS projects 03-56-182 inno and 03-55-1969 YSF. We can now say that it has justified our expectations to a great extent: the method has shown high and stable results in many practical tasks.

Knowledge Engineering

Further we shall describe the general training and recognition scheme for the $l$-class task, using the notation from [1]. Let the training sample be $S_1, S_2, \ldots, S_m$ and the test sample $S'_1, S'_2, \ldots, S'_q$, partitioned by classes as follows:

$$S_{m_{i-1}+1}, S_{m_{i-1}+2}, \ldots, S_{m_i} \in K_i, \quad i = 1,2,\ldots,l, \quad m_0 = 0,\ m_l = m,$$
$$S'_{q_{i-1}+1}, S'_{q_{i-1}+2}, \ldots, S'_{q_i} \in K_i, \quad i = 1,2,\ldots,l, \quad q_0 = 0,\ q_l = q.$$

For simplicity's sake let us also suppose the task is solved without rejections.

Finally, let us have $N$ neural networks $A_j(S) = (\alpha_{1j}(S), \alpha_{2j}(S), \ldots, \alpha_{lj}(S))$, $j = 1,2,\ldots,N$, trained for this task. Applying them to the test sample gives the matrix of recognition results $\alpha_{ij}(S'_t)$.

The algorithm of recognition by the group of neural networks is designed according to the principle of potential correction [4]. A new object is assigned to the class of maximum estimate, which is calculated according to the following formula:

$$\Gamma_i(S) = \frac{1}{q_i - q_{i-1}} \sum_{t = q_{i-1}+1}^{q_i} \Gamma_i(S'_t, S), \quad i = 1,2,\ldots,l.$$

The value $\Gamma_i(S'_t, S)$ is called the potential between $S'_t$ and $S$ and is calculated in one of two ways:

a) $\Gamma_i(S'_t, S) = \begin{cases} 1, & \text{if } \alpha_{ij}(S) \ge \alpha_{ij}(S'_t) \text{ for all } j = 1,2,\ldots,N, \\ 0, & \text{otherwise}; \end{cases}$

b) $\Gamma_i(S'_t, S) = \big|\{\, j : \alpha_{ij}(S) \ge \alpha_{ij}(S'_t),\ j = 1,2,\ldots,N \,\}\big| \,/\, N$, i.e. the fraction of correct inequalities.

The a-type potential will be called monotone; the b-type one will be called weakly monotone with monotony parameter $\gamma$, $0 < \gamma \le 1$.

Thus, the training phase consists of training the $N$ neural networks (without rejections) and subsequently calculating the binary matrix $\alpha_{ij}(S'_t)$. A new object $S$ is classified by calculating its binary matrix $\alpha_{ij}(S)$ ($lN$ values) and its $l$ class estimates according to either the a-type or the b-type potential. As we have already mentioned, the software realization of the method is the NNLC application; the grant system of the INTAS organization has qualified the NNLC application as innovation software.
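The classification step described above can be sketched in a few lines of code. This is a minimal illustration, not the NNLC software itself: the function name, argument layout and use of NumPy are our own assumptions.

```python
import numpy as np

def classify_nnlc(alpha_test, alpha_new, class_bounds, potential="a"):
    """Potential-correction classifier over N trained networks.

    alpha_test   -- (q, l, N) 0/1 array: alpha_ij(S'_t) for the q test objects
    alpha_new    -- (l, N) 0/1 array: alpha_ij(S) for the new object S
    class_bounds -- [q_0, q_1, ..., q_l], class partition of the test sample
    potential    -- "a": monotone (1 iff alpha_ij(S) >= alpha_ij(S'_t) for all j)
                    "b": weakly monotone (fraction of inequalities that hold)
    """
    estimates = []
    for i in range(len(class_bounds) - 1):
        lo, hi = class_bounds[i], class_bounds[i + 1]
        # inequality checks for class i against its test objects: shape (hi-lo, N)
        ok = alpha_new[i][None, :] >= alpha_test[lo:hi, i, :]
        if potential == "a":
            pots = ok.all(axis=1).astype(float)   # a-type: all-or-nothing
        else:
            pots = ok.mean(axis=1)                # b-type: fraction correct
        estimates.append(pots.mean())             # Gamma_i(S): average over K_i
    return int(np.argmax(estimates))              # class of maximum estimate
```

For example, with two classes and two networks, an object whose binary outputs dominate those of the first class's test objects is assigned to that class.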

Acknowledgements

The authors gratefully acknowledge the support of the following organizations in the execution of the described research: INTAS (projects 03-56-182 inno, 04-77-7076, 03-55-1969 YSF) and RFBR (projects 05-07-90333, 06-01-00492, 05-01-00332). The work has also been supported by program N 14 of the RAS Presidium.

Bibliography

[1] Zhuravlev Yu.I. On an algebraic approach to pattern recognition or classification problems // Problems of Cybernetics (Problemy Kibernetiki), Nauka, Moscow, 1978, N 33, pp. 5-68. (In Russian).

[2] Aslanyan L., Zhuravlev Yu., Logic Separation Principle, Computer Science & Information Technologies Conference // Yerevan, 2001, pp. 151-156.

[3] Mingo L., Aslanyan L., Castellanos J., Diaz M., Riazanov V. Fourier Neural Networks: An Approach with Sinusoidal Activation Functions // International Journal Information Theories and Applications, 2004, Vol. 11, pp. 52-53. ISSN 1310-0513.

[4] Zuev Yu.A. A method for increasing the reliability of classification based upon the monotony principle in the case of multiple classifiers // Journal of Computational Mathematics and Mathematical Physics, 1981, Vol. 21, N 1, pp. 157-167.

Fourth International Conference I.TECH 2006

Authors' Information

L.A. Aslanyan – Institute for Informatics and Automation Problems, NAS Armenia; P. Sevak St. 1, Yerevan-14, Armenia; e-mail: lasl@sci.am

L.F. Mingo – Dpto. Organización y Estructura de la Información, Escuela Universitaria de Informática, Universidad Politécnica de Madrid; Crta. de Valencia km. 7 – 28031 Madrid, Spain; e-mail: lfmingo@eui.upm.es

J.B. Castellanos – Dpto. Inteligencia Artificial, Facultad de Informática, Universidad Politécnica de Madrid; Boadilla del Monte – 28660 Madrid, Spain; e-mail: jcastellanos@fi.upm.es

V.V. Ryazanov – Department of Mathematical Pattern Recognition and Methods of Combinatorial Analysis, Computing Centre of the Russian Academy of Sciences; 40 Vavilova St., Moscow GSP-1, 119991, Russian Federation; e-mail: rvvccas@mail.ru

F.B. Chelnokov – Department of Mathematical Pattern Recognition and Methods of Combinatorial Analysis, Computing Centre of the Russian Academy of Sciences; 40 Vavilova St., Moscow GSP-1, 119991, Russian Federation; e-mail: fchel@mail.ru

A.A. Dokukin – Department of Mathematical Pattern Recognition and Methods of Combinatorial Analysis, Computing Centre of the Russian Academy of Sciences; 40 Vavilova St., Moscow GSP-1, 119991, Russian Federation; e-mail: dalex@ccas.ru

LOGIC BASED PATTERN RECOGNITION – ONTOLOGY CONTENT (1)

Levon Aslanyan, Juan Castellanos

Abstract: Pattern recognition (classification) algorithmic models and related structures have been considered and discussed since the 70s. One is formally related to the similarity treatment, and so to the discrete isoperimetric property; the second is logic based and introduced in terms of reduced disjunctive normal forms of Boolean functions. A series of properties of structures appearing in logical models are listed and interpreted. This brings new knowledge on formalisms and ontology when a logic based hypothesis is the model base for pattern recognition (classification).

1. Introduction

Pattern recognition consists in a reasonable formalization (ontology) of informal relations between objects' visible/measurable properties, and in object classification by an automatic or learnable procedure. Among the means of formalization (hypotheses), the metric and logic based ones are the content of the series of articles started by the current one. The stage of pattern recognition algorithmic design in the 70s dealt with algorithmic models: huge parametric structures combined with diverse optimization tools. Algorithmic models cover and integrate wide groups of existing algorithms, integrating their definitions and multiplying their resolution power. A well-known example of this kind is the algorithmic model of estimation of analogies (AEA) given by Yu.I. Zhuravlev [1]. This model is based indirectly on the compactness hypothesis, which is theoretically related to the well-known discrete isoperimetric problem [3]. The optimization problem of isoperimetry is a separate theoretical issue, and its pattern recognition implementations are linked alternatively to the general ideas of potential functions [4]. We present the logical separation (LSA) algorithmic model, described below, as one of the generalizations of the algorithmic model of estimation of analogies. For AEA models a number of useful combinatorial formulas (algorithms) for calculating the analogy measure of objects, and of objects and classes, were proven [2]; these are the basic values for the decision-making rules in AEA. In these models a large number of parameters appears, which are consecutively approximated using appropriate optimization procedures. For this reason, a special control set is considered besides the learning set, having the same formal structure as the learning set.

(1) The research is supported partly by the INTAS 04-77-7173 project, http://www.intas.be

Considering the classification correctness conditions for the set of given objects under the decision procedure, we get a system of restrictions/inequalities, which may not be consistent. In the simplest case a system of linear inequalities appears, and we obtain the problem of approximating the maximal consistent subsystem of this basic requirements system. In terms of Boolean functions this is equivalent to the well-known optimization problem of determining one of the maximal upper zeros of a monotone Boolean function given by an operator (oracle).
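To make the last notion concrete, one maximal upper zero of a monotone Boolean function accessible only through an oracle can be found by a simple greedy pass over the coordinates. This is a toy sketch under our own naming, not the optimization procedure of the paper; by monotonicity, every coordinate left at 0 at the end cannot be raised without making the function equal to 1, so the result is indeed a maximal zero.

```python
def maximal_upper_zero(f, n):
    """Greedily grow a maximal zero of a monotone Boolean function f on {0,1}^n.

    f is an oracle taking a 0/1 list of length n.  Starting from the all-zero
    vector (assumed to be a zero of f), each coordinate is flipped to 1 and
    kept only if f stays 0.  Monotonicity guarantees the final vector is an
    upper zero: raising any remaining 0-coordinate would make f equal to 1.
    """
    x = [0] * n
    assert f(x) == 0, "the all-zero vector must be a zero of f"
    for i in range(n):
        x[i] = 1
        if f(x) == 1:   # this coordinate cannot be raised; revert it
            x[i] = 0
    return x
```

For instance, for the monotone threshold function that is 1 whenever at least two coordinates are 1, the procedure returns a vector with a single 1, which is maximal among zeros.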

LSA is based on the implementation of additional logical treatments on learning set elements, on top of the AEA-specific metric considerations. The formalization of additional classification properties in this case is expressed in terms of Boolean functions, and especially in terms of their reduced disjunctive normal forms. Let us consider a set of logical variables (properties) $x_1, x_2, \ldots, x_n$ and let there be two types/classes for classification, $K_1$ and $K_2$. Let $\alpha \in K_1$ and $\beta \in K_2$, and let $\sigma$ be an unknown object in the sense of classification. We say that $\sigma$ is separated by the information of $\beta$ for $\alpha$ if $\beta \oplus \alpha \le \sigma \oplus \alpha$ componentwise, where $\oplus$ is summation by the mod 2 operation.

Under this assumption we find that the reduced disjunctive normal forms of two complementary partially defined Boolean functions describe the structure of the information enlargement of the learning set. This construction extends the model of estimation of analogies. It was shown that the logical separators divide the object set into three subsets, only one of which needs treatment by AEA. This set is large for almost all weakly defined Boolean functions, but for functions with the compactness property it is small. For $0 \le k_0 < k_1 \le n$, let $F_{n,k_0,k_1}$ be the set of all Boolean functions whose sets of zeros and ones contain the pair of spheres of radii $k_0$ and $n - k_1$ centered at $\tilde 0$ and $\tilde 1$ respectively; on the remaining vertices of the $n$-cube the values of the functions are arbitrary. These functions satisfy the compactness assumptions, and their number is not less than $2^{\varphi(n)2^n}$ for an appropriate $\varphi(n) \to 0$ as $n \to \infty$. For these functions, moreover, a learning set consisting of any $n2^{n-\varphi(n)n}$ or more arbitrary points is sufficient for recovering the full classification by means of the logical separators procedure. This is an example of the postulations considered. The given one relates the metric and logic structures and suppositions, although separately oriented postulations are listed as well. The follow-up articles will describe the mixed hierarchy of recognition metric-logic interpretable hypotheses, which helps to allocate classification algorithms to application problems.
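The separation condition stated above can be checked coordinatewise. The sketch below assumes the reading of the condition as componentwise $\beta \oplus \alpha \le \sigma \oplus \alpha$, i.e. $\sigma$ differs from $\alpha$ in every coordinate where $\beta$ differs from $\alpha$; the function name is ours.

```python
def separates(alpha, beta, sigma):
    """Check whether sigma is separated by the information of beta for alpha:
    componentwise (beta XOR alpha) <= (sigma XOR alpha), i.e. sigma lies
    'beyond' beta relative to alpha in every coordinate where they disagree.
    All three arguments are 0/1 sequences of equal length n.
    """
    return all((b ^ a) <= (s ^ a) for a, b, s in zip(alpha, beta, sigma))
```

For example, with $\alpha = (0,0,0)$ and $\beta = (1,0,0)$, the object $(1,1,0)$ is separated (it flips the first coordinate as $\beta$ does), while $(0,1,0)$ is not.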

2. Logic Based Model

Solving the main problem of pattern recognition or classification assumes that indirect or informal information or data on the classification $K_1, K_2, \ldots, K_l$ is given. Often this information takes the form of conditions analogous to the compactness hypothesis, which in its most common shape assumes that a metric is given on the space of all objects and that closer values of the classification predicate correspond to pairs of "near" objects. We assume that objects are coded, i.e. characterized by the collections of values of $n$ properties $x_1, x_2, \ldots, x_n$. Then each object is identified with the corresponding point of the $n$-dimensional characteristic space. So by the compactness of classes we assume the geometrical compactness of the sets of points in the characteristic space that correspond to the elements of the classes $K_1, K_2, \ldots, K_l$, and the consecutive refinements of this property can be given in the following descriptive form: close neighborhoods of class elements belong to the same class; increasing the distance from a class element increases the probability of a class change; for pairs of elements of different classes there exist simple paths in three parts, the two classes and a limited transition area in the middle.

Above we have already considered the general formalization models of hypotheses by metrics and by logic. More formalization leads to more restricted sets of allowable classifications, and in this regard it is extremely important to determine the level of formalism applied. During practical classification problem arrangements one has to check how well the application problem satisfies the metric and/or logic hypotheses. Resolution is conditioned by the properties of the given learning set. On the other hand, there are many different conditions and methods of classification which are very far in similarity from the model of compactness. These structures require and use other formalisms, providing solution tools for wide classes of practical pattern recognition problems. Such are the classes of algorithms of estimation of analogies, test algorithms [2], potential function methods [4], etc.
