PCR-primer Np REp XY RE0 XY RE1 …… REn-1 XY REp Np-1 PCR-primer, where XY ∈ { AB, BC, CD, AC, BD }.

The selection criteria for the individuals in this case rely on the encoding: following the existing approach, the selection strands join two vertices of the graph, neither the initial nor the final one.
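As an illustration, the token layout above can be assembled programmatically. This is a hypothetical sketch: the token names follow the paper's notation, "Np-1" is rendered literally, and the concrete edge labels are placeholders.

```python
def encode_individual(edges):
    """Assemble an individual from a sequence of edge labels XY
    drawn from {AB, BC, CD, AC, BD}, in the layout
    PCR-primer Np REp XY RE0 ... REn-1 XY REp Np-1 PCR-primer."""
    tokens = ["PCR-primer", "Np", "REp"]
    for i, xy in enumerate(edges[:-1]):
        tokens += [xy, f"RE{i}"]          # interior edges separated by RE0..REn-1
    tokens += [edges[-1], "REp", "Np-1", "PCR-primer"]
    return " ".join(tokens)

s = encode_individual(["AB", "BC", "CD"])
# -> "PCR-primer Np REp AB RE0 BC RE1 CD REp Np-1 PCR-primer"
```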

Discarded are those individuals that do not start in the origin city, those that do not finish in the final city, and those that contain repeated cities.

Selected are those structures whose stretch from O to N is covered by the selection strands.
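The discard rules above can be mirrored in software. This is a minimal in-silico sketch, assuming individuals are simple city sequences; the city labels and the example population are illustrative, not from the paper.

```python
def survives_selection(path, origin, destination):
    """Return True if the path would be retained by the selection strands."""
    if not path or path[0] != origin:
        return False          # discarded: does not start in the origin city
    if path[-1] != destination:
        return False          # discarded: does not finish in the final city
    if len(set(path)) != len(path):
        return False          # discarded: contains repeated cities
    return True

population = [
    ["A", "B", "C", "D"],     # valid tour
    ["B", "C", "D", "A"],     # wrong starting city
    ["A", "B", "B", "D"],     # repeated city
]
selected = [p for p in population if survives_selection(p, "A", "D")]
```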

The format for introducing individuals into the mating zone is the following:

PCR-primer Np Nm REp XY RE0 XY RE1 …… REn-1 XY REp Np-1 PCR-primer, where XY ∈ { AB, BC, CD, AC, BD }.

We then proceed to the crossover, mutation and evaluation of the degree of adaptation of the individuals prior to their introduction into the population, and the process is repeated until the population converges, obtaining in each iteration the degree of adaptation.

Conclusion

This work has produced a new approach to the simulation of genetic algorithms with DNA. The problem of evaluating fitness in parallel form has been resolved satisfactorily. This does not imply that the definition is independent of the problem at hand, even though there are rules that facilitate such definitions for problems that can be solved by means of genetic algorithms.

In a GA simulated with DNA, the concept of a fitness field disappears.

The coding of the individuals is closely related to the characteristics of electrophoretic migration.

The fitness of an individual is embedded in its coding, and any attempt to add such a field would be a grave error.

The addition of such a field would mean a personalisation of the individual, thus preventing massive and anonymous parallelism.


Authors' Information

Maria Calvino – Dep. Inteligencia Artificial, Facultad de Informatica, Universidad Politecnica de Madrid, Boadilla del Monte, 28660 Madrid, Spain; e-mail: maria.calvino@medicimage.com

Nuria Gomez – Dep. Organizacion y Estructura de la Informacion, Escuela Universitaria de Informatica, Universidad Politecnica de Madrid, 28031 Madrid, Spain; e-mail: ngomez@dalum.eui.upm.es

Luis Fernando de Mingo Lopez – Dep. Organizacion y Estructura de la Informacion, Escuela Universitaria de Informatica, Universidad Politecnica de Madrid, 28031 Madrid, Spain; e-mail: lfmingo@eui.upm.es

ESTIMATING THE VOLUME FOR AREA FOREST INVENTORY WITH GROWING RADIAL BASIS NEURAL NETWORKS

Angel Castellanos, Ana Martinez Blanco, Valentin Palencia

Abstract: This paper proposes a new method to compute clusters and centers for the classification and approximation of wood volume, using a set of data that can be easily obtained, such as height, radius, surface, and the surfaces used in cubical proofs and in the forest-inventory building process. The proposed model, using radial basis function neural networks, achieves rapid convergence on these tasks. The method is compared to other methods and obtains better results with fewer training patterns; it also classifies the trees into a valid cluster set. Some figures are shown in order to better explain the learning procedure and the results of this clustering process.

Keywords: Neural Networks, Radial Basis Functions, Clustering, Forest Inventory.

Introduction

Generally, standard formulas such as Huber's and others have been used to estimate wood volumes.

Because of its simplicity and practicality, Huber's formula is frequently used for volume estimation. A new approach, using radial basis function neural networks, was developed to predict volume when only a few data are available and when different species of trees are combined and the volume of wood must be obtained.

The research community has developed several different neural network models, such as backpropagation, radial basis function, growing cell structures [Fritzke 1994] and self-organizing feature maps [Kohonen 1989]. A common characteristic of these models is that they distinguish between a learning and a performance phase. Neural networks with radial basis functions have proven to be an excellent tool for approximation with few patterns. The most relevant research in the theory, design and applications of radial basis function neural networks is due to Moody and Darken [Moody and Darken, 1989], Renals [Renals, 1989] and Poggio and Girosi [Poggio and Girosi, 1990].

Radial basis function (RBF) neural networks provide a powerful alternative to multilayer perceptron (MLP) neural networks for approximating or classifying a pattern set. RBFs differ from MLPs in that the overall input-output map is constructed from local contributions of Gaussian units; they require fewer training samples and train faster than MLPs.

The most widely used method to estimate centers and widths consists of an unsupervised technique called the k-nearest-neighbour rule (see Figure 1). The centers of the clusters give the centers of the RBFs, and the distance between the clusters provides the widths of the Gaussians. Computation of the centers used in the kernel functions of the RBF neural network is the main focus of study in order to achieve more efficient algorithms in the learning process of the pattern set. The choice of adequate centers implies high performance in terms of learning time, convergence and generalization.
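The centre-and-width scheme just described can be sketched in a few lines. This is a simplified stand-in: a naive k-means replaces the k-nearest-neighbour rule, and the synthetic two-blob data merely mimics a two-cluster pattern set.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Naive k-means; initial centres are spread across the data."""
    centers = X[:: max(len(X) // k, 1)][:k].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf_design(X, centers):
    """Gaussian activations; each width = distance to the nearest other centre."""
    d = np.sqrt(((centers[:, None] - centers[None]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    widths = d.min(axis=1)
    sq = ((X[:, None] - centers[None]) ** 2).sum(-1)
    return np.exp(-sq / (2 * widths ** 2))

# Two well-separated blobs, standing in for the two-species pattern set.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
               rng.normal(5.0, 0.3, (30, 2))])
centers = kmeans(X, 2)
Phi = rbf_design(X, centers)   # hidden-layer activations, shape (60, 2)
```

The cluster centres land on the blob means, which is exactly the role they play as RBF centres in the text above.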

Figure 1.- Radial Basis Function Neural Network.

Problem Description

The volume parameter is one of the most important parameters in forest research when dealing with forest inventories [Schreuder, H.T., Gregoire, T.G. and Word, G.B. 1993]. Usually, some trees are periodically cut in order to obtain such parameters, using cubical proofs for each tree and for a given environment. In this way, a repository is constructed so that the volume of wood can be computed for a given area and for a given tree species in different environments. Stem volume is a function of a tree's height, basal area, shape, etc. It is one of the most difficult parameters to measure, because an error in the measurement or in the assumptions for any one of the above factors will propagate to the volume estimate. Volume is often measured for specific purposes, and interpretation of the volume estimate will depend on the units of measurement, the standards of use, and other specifications.

Calculations of merchantable volume may also be based on true cubic volume. Direct and indirect methods for estimating volume are available [Hamilton, F. and Brack, C.L. 1999].

The methods used to estimate volumes in forests are tree volume tables or tree volume equations. Huber's volume equation is a very common equation used to estimate volume:

V = (π · d² / 4) · h, where V denotes volume, h denotes length, and d denotes diameter.

Another form of the previous equation is:

V = f · (π · d² / 4) · h, where f is a factor for the merchantable volume.

Here we propose a study of the potential wood forest amount, that is, the maximum amount of wood that can be obtained. All data are taken from an inventory of the M-1019 area at “Elenco” in Madrid (Spain), at “Atazar” village. Most of the trees belong to the species Pinus pinaster and a small number to Pinus sylvestris.
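For concreteness, Huber's formula and its form-factor variant can be computed as follows; the function name and the metric units are our own choices, not the paper's.

```python
import math

def huber_volume(d_mid, h, f=1.0):
    """Stem volume by Huber's formula: cross-sectional area (pi*d^2/4)
    times length h, optionally scaled by a merchantable-volume factor f.
    With d_mid and h in metres, the result is in cubic metres."""
    return f * (math.pi * d_mid ** 2 / 4.0) * h

# e.g. a stem of mid-diameter 0.30 m and length 10 m:
v = huber_volume(0.30, 10.0)        # ≈ 0.707 m^3
```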

This whole area is devoted to wood production. The area is divided into two different sub-areas with surfaces of 55.6 ha and 46.7 ha respectively. The main aim is to be able to forecast the wood volume and to detect relationships between all the variables in our study. The variables taken into account are: normalized diameter, total height, surface thickness, and radial growth in the last ten years. The normalized diameter has been measured on all the stems of the two sub-areas that make up the samples, provided they are larger than 12.5 cm, up to the last class of 60 cm. A parabolic regression analysis has been performed in order to obtain the cubical proofs to be compared with the results obtained using neural networks.

Radial Basis Function Networks as Classifiers for Prediction in the Forest Products

A radial basis function neural network has been implemented with four input neurons: diameter, surface thickness, diameter growth and height, in order to estimate the volume of usable wood. The net uses a competitive rule with full conscience in the hidden layer and one output layer with the tanh function; the whole learning process has been performed with the momentum algorithm. The unsupervised learning stage is based on 100 epochs, and the supervised learning control uses a maximum of 1000 epochs and a threshold of 0.01. We performed an initial study using 260 patterns in the training set, then 90 patterns, and finally only 50 patterns; the MSE errors are similar in the three cases. The problem under study is the prediction of the volume of wood, and it is compared to other methods, such as Huber's formula and statistical regression analysis, in order to estimate the amount of wood using typical tree variables: diameter, thickness and diameter growth. The neural networks approximated the tested examples well, obtaining a small mean squared error (see the table below). A radial basis function neural network learns with only a few patterns, which is why the results using only 50 patterns are really excellent. For each of the tree species tested, the RBF gives a smaller estimated MSE than the standard Huber formula and multivariate regression analysis.

        Error-Huber    Error-RBF    Error-regression (multivariate)
MSE     0.05           0.007        0.…

The next step consists of forecasting the importance of the input variables (sensitivity analysis) in the learning process.

The neural network is a mapping f(x1, x2, x3, x4): ℝ⁴ → ℝ, where x1 = diameter (cm), x2 = thickness (cm), x3 = growth of diameter (cm) and x4 = height (cm), used to forecast the variable x5 = volume (dm³). All centers are stable at two points, which are those that mark the two main clusters; the net has thus been able to detect the two tree species.
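The sensitivity analysis mentioned above can be sketched by refitting the model with each input variable removed and comparing MSEs. An ordinary least-squares fit stands in here for the RBF network, and the synthetic data (in which x3 carries no information, mimicking the diameter-growth variable) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))       # x1..x4: diameter, thickness, growth, height
X[:, 2] = 1.0                       # x3 constant -> uninformative input
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 3] + rng.normal(0, 0.1, 200)

def fit_mse(X, y):
    """MSE of a least-squares linear fit with intercept."""
    A = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((A @ w - y) ** 2))

base = fit_mse(X, y)
for j in range(4):
    reduced = fit_mse(np.delete(X, j, axis=1), y)
    print(f"without x{j+1}: MSE = {reduced:.4f} (full model: {base:.4f})")
```

Removing x3 leaves the MSE essentially unchanged, while removing an informative input such as x1 degrades it sharply, which is the criterion used in the text to discard the diameter-growth variable.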

Several matrices have been computed, in which the columns are the input variables used to forecast and the rows are the hidden neurons. These matrices show the center values. The variable x3 = diameter growth takes the same value in both centers, which means that the study can be done without this variable, obtaining similar values of the MSE. The main centers of the RBF approximate the real clusters in the two forest areas; the following table shows the real clustering.

Zone - species   dR (cm)   esp (cm)   growing dR (cm)   height aR (cm)
1                19,49     5,28       3,19              6,…
2                33,71     7,38       3,91              10,…

The previous table shows the matrix where the columns represent the input variables and the rows represent the hidden neurons. The hyperspace is divided into different regions or clusters, starting from 16. Later, the number of clusters was decreased until the minimum number of clusters able to solve the problem while minimizing the mean squared error was reached. Two main centers are found in the hyperspace; see the following figures.

Four input variables, 16 clusters: MSE = 0.…
Four input variables, 12 clusters: MSE = 0.…
Four input variables, 8 clusters: MSE = 0.…
Four input variables, 5 clusters: MSE = 0.…
Four input variables, 4 clusters: MSE = 0.…
Three input variables, 4 clusters: MSE = 0.…
Three input variables, 3 clusters: MSE = 0.…
Two input variables, 4 clusters: MSE = 0.…
Two input variables, 3 clusters: MSE = 0.…

All these tasks performed by the RBF neural network make it possible to classify the input patterns into the two main clusters belonging to the two tree species. Also, the variable representing the diameter growth is the least important one. If a neural network without this variable is implemented, with only 50 input patterns, the results concerning the mean squared error are similar. When the number of input variables is decreased, the mean squared error increases, but the forecasting results are still good if the importance of the input variables is taken into account.
