# GPdotNET v4.0 has been released

After almost two years of implementation, I am proud to announce the fourth version of the open source project GPdotNET. The latest version fully implements Genetic Programming and Artificial Neural Networks for supervised learning tasks on three kinds of problems: regression, binary and multiclass classification. Besides supervised learning tasks, GPdotNET can solve several linear programming problems: the Traveling Salesman, Assignment and Transportation problems. The source code and binaries can be downloaded from the GitHub page: https://github.com/bhrnjica/gpdotnet/releases/tag/v4.0

Figure 1. Main Window in GPdotNET v4.0

# Introduction

GPdotNET started in 2006 as a post-graduate semester project in which I was trying to implement a simple C# program based on genetic programming. After successfully implementing the console application, I started building a .NET Windows application that would be easy to use for anyone who wants to build a mathematical model from data using the genetic programming method. In November 2009 GPdotNET became an open source project, with the source code and an installer publicly available. Since then I have received hundreds of emails, pieces of feedback, questions and comments. The project was hosted at http://gpdotnet.codeplex.com. In 2016 I decided to move the project to GitHub for better collaboration and compatibility; it can now be found at http://github.com/bhrnjica/gpdotnet. For backward compatibility, the old hosting site will remain live for as long as codeplex.com does. Since the beginning of development, my intention has been for GPdotNET to be a cross-OS application that can run on Windows, Linux and Mac. Since version 2, GPdotNET can be compiled against .NET and Mono, and can run on any OS with the Mono Framework installed. That said, the vast majority of users run GPdotNET on Windows.

GPdotNET is primarily used in academia, helping engineers and researchers model and predict various problems: air pollution, water treatment, rainfall prediction, modeling of machining processes, electrical engineering, vibration, the automotive industry, etc. GPdotNET has been used in more than ten doctoral dissertations and master theses (known to me), and nearly a hundred papers have used GPdotNET in some kind of calculation.

# Modeling with GPdotNET (New in GPdotNET v4.0)

Working with GPdotNET requires data. Through its learning algorithms, GPdotNET uses data from research or experimental measurements to learn about the problem. The results of the learning algorithms are analytical models which can describe or predict the state of the problem, or recognize a pattern. GPdotNET is very easy to use, even if you have no deep knowledge of GA, GP or ANN, and applying those methods to find a solution can be achieved very quickly. The project can be used for modeling any kind of engineering process that can be described with discrete data, as well as in education, for teaching students about evolutionary methods (mainly GP and GA) and Artificial Neural Networks.

Working in GPdotNET follows the same procedure regardless of the problem type: you go through the same set of steps whether you are modeling with Genetic Programming or with Neural Networks. In fact, GPdotNET presents the same set of input dialogs whether you are solving the Traveling Salesman Problem with a Genetic Algorithm or handwriting recognition with a Backpropagation Neural Network. All learning algorithms within GPdotNET share the same UI.

The picture below shows the flowchart of modeling in GPdotNET. The five steps are depicted in graphical form, surrounded by the Start and Stop items.

Figure 2. Modelling layout in GPdotNET 4.0

After GPdotNET starts, the main window is shown and the modeling process can begin.

## Choosing the Solver Type

The first step is choosing the type of solver, which depends on what you want to do. Choosing the solver type begins when you press the “New” button, which opens the “GPdotNET Model creation wizard”. Solver types are grouped into two categories. The first group (on the left side) contains models implemented prior to v4.0. It contains solvers which apply GP to modeling regression problems, and to the optimization of GP models. In addition, you can optimize any analytically defined function by using “Optimization of the Analytic function”. There are also three linear programming problems which GPdotNET can solve using GA.

On the right side there are two kinds of solvers, GP and ANN, which are not limited to regression. Both GP and ANN can build models for regression, binary or multi-class problems. Which type of problem GPdotNET will use depends on the type of the output column data (the label column).

Figure 3. Available model types

## Loading Experimental Data (new in GPdotNET 4.0)

GPdotNET provides a powerful tool for importing your experimental data regardless of its type. You can import numerical, binary or categorical data by using the Import Data Wizard. With the GPdotNET importing tools you can import any kind of textual data, with any kind of separator character.

Figure 4. Importing dataset dialog

After the data is imported in the form of columns and rows, GPdotNET offers a set of very simple controls which can perform very powerful feature engineering. For each loaded column you can set several types of metadata: column name, parameter type (input, output, ignore), normalization type (minmax, gauss), and missing-value replacement (min, max, avg). With those options you can cover most modeling scenarios. Before pressing “Start Modelling”, the following minimum conditions must be met:

• At least one column must be of “input” parameter type.
• At least one column must be of “output” parameter type.

Which type of problem (regression, binary or multi-class classification) will be used depends on the type of the output column. The following cases are considered:

1. For regression problems, the output column must be of numeric type.
2. For binary classification, the output column must be of binary type.
3. For multi-class classification, the output column must be of categorical type.
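The rule above can be sketched as a small lookup. This is an illustrative Python sketch; the function name and type strings are hypothetical, not GPdotNET's actual API.

```python
# Illustrative sketch: the output (label) column's data type determines
# the supervised learning problem type, as described above.
# The function name and type strings are hypothetical, not GPdotNET's API.

def infer_problem_type(output_column_type: str) -> str:
    """Map the output column type to the supervised learning problem type."""
    mapping = {
        "numeric": "regression",
        "binary": "binary classification",
        "categorical": "multi-class classification",
    }
    if output_column_type not in mapping:
        raise ValueError(f"unsupported output column type: {output_column_type!r}")
    return mapping[output_column_type]

print(infer_problem_type("numeric"))  # regression
```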

Figure 5. Defining metadata for training data set

When a column should not be part of the feature list, it can easily be excluded by setting its Column Type to “ignore“ or its Param type to “string“.

Figure 6. Changing column type to binary

To change a metadata value, double-click the current value and select a new value from the popup list. When you are done with feature engineering, press the “Start Modelling” button and the modeling process can begin.

Note: After you press the Start Modelling button you can still change metadata values, but after every change the Start Modelling button must be pressed again.

## Setting Learning Parameters

Figure 7. Setting parameters Dialog

After the data is loaded and prepared successfully, you have to set the parameters for the selected method. GPdotNET provides various parameters for each method, so you can choose the parameter values that generate the best output model. Every parameter is self-explanatory.

## Searching for the solution

GPdotNET visualizes the search for a solution, so you can monitor how GPdotNET finds better solutions as the iteration number increases. Besides the search simulation, GPdotNET provides instant result representation (GP models only), so at any time the user can see the current best solution and how well it performs against the validation or prediction data set (the Result and Prediction tabs).

Figure 8. Searching simulation in GPdotNET

## Saving and exporting the results

GPdotNET provides several options for exporting your solution. You can export it to Excel or a text file, as well as to the Wolfram Mathematica or R programming languages (GP models only). ANN models can be exported only to Excel.

Figure 9. Export options in GPdotNET

Besides the parameters specific to each learning algorithm, GPdotNET provides a set of parameters which control how the iteration process terminates, as well as how it is parallelized to use multicore processors. During the search GPdotNET records its history, so you can see when the best solution was found, how much time has passed since the iteration process started, and how much time remains until the currently running iteration process finishes.

Since GP is a method which requires a lot of processing time, GPdotNET provides parallelization, which speeds up the search process. Enabling or disabling parallel processing is just a click of a button.

## GPdotNET Start Page

In case you have no data or just want to test the application, GPdotNET provides 15 data samples for demo purposes. The samples are grouped by problem type: Approximation and Regression, Binary Classification, Multi-class Classification, Time Series Modelling and Linear Programming.

Figure 10. GPdotNET Start Page

By clicking the appropriate link, a sample can be opened to see its current result and parameter values. You can easily change a parameter, press the Run button and search for another solution. This is a very handy way to get acquainted with GPdotNET. At any time you can stop the search and export the current model or save the current state of the program.

Final note: The project is licensed under the GNU Library General Public License (LGPL). For information about the license and other copyright matters, e.g. using the application for commercial purposes, please see http://github.com/bhrnjica/gpdotnet/blob/master/license.md.

In case you need to cite it in a scientific paper or book, please refer to https://wordpress.com/post/bhrnjica.net/5995


# GPdotNET 4.0 first look: Classification with Neural Networks

After some time of implementation and testing, the new version of GPdotNET is out. Go to the codeplex page and download it. This is a huge step in the development of this project, because it contains a completely new module based on Artificial Neural Networks and other optimization methods, e.g. Particle Swarm Optimization.

Almost all aspects of the architecture have changed, or will change by the time this version is released.

The new Experiment class will replace the existing classes for handling experimental data in GA-based models, which is not implemented yet. The new Start Page will also contain more pre-calculated examples.

For this beta, here are the new features:

1. The new Start Page is extended with new Neural Network examples: binary classification, multiclass classification and regression.
2. An improved module for loading experimental data, which now supports non-numeric data such as categorical or binary data.
3. Depending on the output column of the loaded experimental data, a different learning algorithm is selected. For example, if the column is of categorical type, GPdotNET selects a Neural Network with Cross-Entropy and the Particle Swarm Optimization learning algorithm. If the output column is numerical, GPdotNET selects a Neural Network with the Backpropagation learning algorithm. Currently only two learning algorithms are implemented; more will follow in the next beta release.

## Classification problem with GPdotNET

This topic gives a quick tutorial on how to model a classification problem with GPdotNET using Neural Networks. For this tutorial you need some experimental data, which you can download from this location.

• Open GPdotNET and choose the New command.
• When the New dialog pops up, check Artificial Neural Nets and press the OK button.

After the solver type is selected, GPdotNET creates the “Load Experiment” page, in which you can load experimental data and define the percentage of data used for testing the model.

• Press the Load Data button.
• A new popup dialog appears.
• Press the File button and select the file you previously downloaded.
• Check the Semicolon and First Row Header check boxes.

Note: When the data is analyzed correctly you should see a vertical line “|” between columns. Otherwise the data will not be loaded correctly.

• When the experimental data is formatted correctly, press the “Import Data” button.

The next step is preparing columns for modeling.

For each column we have to set:

• a) the proper column type (numeric, categorical or binary),
• b) the parameter type (input, output or ignore),
• c) the normalization method (MinMax, Gauss or custom normalization of the column values).
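The two built-in normalization options named above correspond to standard formulas. A minimal sketch (illustrative Python; GPdotNET's internals may differ):

```python
# Standard formulas behind the MinMax and Gauss normalization options
# (illustrative; GPdotNET's internals may differ).
import statistics

def minmax(values):
    """Scale values linearly into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:                       # constant column: nothing to scale
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def gauss(values):
    """Standardize to zero mean and unit standard deviation (z-score)."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    return [(v - mean) / std for v in values]

print(minmax([0, 5, 10]))  # [0.0, 0.5, 1.0]
```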

To change the Column Type, double-click the cell showing the column type; a combo box appears with the list of available types.

• To change the Parameter Type, double-click the cell showing the param type; a combo box appears with the list of available parameter types.
• To change the Normalization Type, double-click the cell showing the MinMax value; a combo box appears with the list of available normalization types.

Note: you can set only one output column. Set the Parameter Type to Ignore if you want to exclude a column from modeling.

Now we have the experimental data and can start the modeling process. Before that we need to choose how much data will be held out for testing. Enter 10% for the test data and press the Start Modelling button.
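Holding out a percentage of the rows for testing, as in the 10% step above, can be sketched like this (illustrative Python; names and the shuffling strategy are assumptions, not GPdotNET's actual splitting code):

```python
# Illustrative sketch of holding out a percentage of rows for testing.
# Not GPdotNET's actual code; the shuffle-then-split strategy is assumed.
import random

def split_rows(rows, test_percent, seed=42):
    """Shuffle the rows and split off test_percent of them for testing."""
    rng = random.Random(seed)
    shuffled = list(rows)
    rng.shuffle(shuffled)
    n_test = max(1, round(len(shuffled) * test_percent / 100))
    return shuffled[n_test:], shuffled[:n_test]  # (training, testing)

train, test = split_rows(range(100), 10)
print(len(train), len(test))  # 90 10
```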

Now we have more pages:

1. The Settings page, for setting the parameters of the Neural Network.

2. The Run page, for the simulation of the solution search.

3. The Prediction page, where you can see how well the solution performs against the testing data.

### Settings Page

On the Settings page you can set various parameters for the Neural Network and Particle Swarm Optimization. Try to train the model with different parameter values. For this tutorial you can leave the parameters as they are.

### Modeling Page

The Modeling page contains two diagrams. The first diagram shows the error with respect to the iteration number, which is very useful for monitoring the search process. The second diagram, below the first, shows the current best solution (blue line) compared with the experimental data (red line). On the left side of the page you can also see the current iteration number, the error, and other information about the search process.

### Prediction Page

The Prediction page shows how the current best model predicts data. It contains tabular and graphical representations of the predicted data, compared with the testing data.

## Features that do not work in this BETA

1. Exporting Neural Network models

2. Saving Neural Network models to a gpa file

# Tools for Analyzing the Results of Experimental Research

Whether you are writing a scientific paper or a master's or doctoral thesis, you will be handling the results of your research, which are mostly in discrete form. Discrete research results are primarily given in tabular form, with several input parameters and one or more output variables.
Suppose you measured, for example, the cutting force while varying the tool diameter and the feed rate. In that case, the result of your measurement could be a table similar to this one:

| No. | s [mm/rev] | d [mm] | F [N] |
|----:|-----------:|-------:|------:|
|  1  | 0.25       |  8     | 318.8 |
|  2  | 0.35       |  8     | 437   |
|  3  | 0.25       | 14     | 450   |
|  4  | 0.35       | 14     | 530.3 |
|  5  | 0.3        | 11     | 445.6 |
|  6  | 0.3        | 11     | 467   |
|  7  | 0.3        | 11     | 475.5 |
|  8  | 0.3        | 11     | 456.8 |
|  9  | 0.3        | 11     | 469   |
| 10  | 0.38       | 11     | 480.8 |
| 11  | 0.23       | 11     | 399   |
| 12  | 0.3        | 16     | 588.2 |
| 13  | 0.3        |  7     | 320   |


In my opinion, the best tool for modeling data given in discrete form is Wolfram Mathematica. To obtain regression models with Mathematica, the experimental data must first be prepared, i.e. a variable named eksperiment must be defined with the values from the table. From the experimental results shown in the table, a regression analysis should be performed and a mathematical model defined, i.e. the functional dependence of the drilling force on the feed rate and the tool diameter.

The source code shown in the following listing represents one way to store the data in a variable that holds the list of experimental data.


```
eksperiment={{0.25,8,318.8},{0.35,8,437},{0.25,14,450},{0.35,14,530.3},{0.3,11,445.6},{0.3,11,467},{0.3,11,475.5},{0.3,11,456.8},{0.3,11,469},{0.38,11,480.8},{0.23,11,399},{0.3,16,588.2},{0.3,7,320}}
```


Now that we have the variable, it is very simple to obtain mathematical models. The variable represents a 2D array consisting of the rows and columns of our original table.

For example, to obtain a second-order regression model with a linear interaction term, the following command is executed:

```
rModel2=Fit[eksperiment,{1,x,x^2, y,y^2,x*y},{x,y}]
```


With the command above, Mathematica determines the quadratic model by the method of least squares. As can be seen, the Fit command takes the model scheme as one of its arguments. The model scheme lists the polynomial terms that will appear in the mathematical model. After executing these two commands, Mathematica returned the mathematical model highlighted with the red rectangle:

Of course, the Fit command accepts any combination of factors and any polynomial degree, so the reader is left to explore the other models. For example, it is very interesting to determine the third-order regression model with linear and quadratic interactions of the input parameters.
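For readers without Mathematica, the same second-order least-squares fit can be reproduced elsewhere; here is a sketch with NumPy (assumed available). The model scheme {1, x, x^2, y, y^2, x*y} becomes the columns of the design matrix:

```python
# Least-squares fit of the drilling-force data, mirroring the Fit command.
# NumPy is assumed available; this is a sketch, not Mathematica's algorithm.
import numpy as np

# experimental data: columns are s [mm/rev], d [mm], F [N]
eksperiment = np.array([
    [0.25, 8, 318.8], [0.35, 8, 437], [0.25, 14, 450], [0.35, 14, 530.3],
    [0.3, 11, 445.6], [0.3, 11, 467], [0.3, 11, 475.5], [0.3, 11, 456.8],
    [0.3, 11, 469], [0.38, 11, 480.8], [0.23, 11, 399], [0.3, 16, 588.2],
    [0.3, 7, 320],
])

x, y, F = eksperiment[:, 0], eksperiment[:, 1], eksperiment[:, 2]

# one column per term of the model scheme {1, x, x^2, y, y^2, x*y}
A = np.column_stack([np.ones_like(x), x, x**2, y, y**2, x * y])
coeffs, *_ = np.linalg.lstsq(A, F, rcond=None)
print(coeffs)  # one coefficient per term, found by least squares
```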

It looks even better when the obtained regression model is presented graphically, by executing the following command:

We have seen how regression models can easily be obtained from a discrete data set that may represent your experimental research. Of course, all of this can also be done in Microsoft Excel, only with a bit more effort.

## Modeling Data with Genetic Programming

Data can also be modeled with the evolutionary method of genetic programming, which can produce very high-quality models that may be considerably more accurate than regression models. The advantage of evolutionary models (models obtained by one of the evolutionary methods) is that they depend neither on the polynomial degree nor on the interactions among the input parameters. In this way, the models, as well as the interactions among the input parameters, are generated in a natural way. One of the tools that uses genetic programming for modeling experimental results is GPdotNET, which applies genetic programming to build mathematical models in a very simple and intuitive way. More information about GPdotNET can be found at https://bhrnjica.net/GPdotNET.

To load the experimental results presented in the table above into GPdotNET, a CSV file defining the training data set must be created.

– Open Notepad, copy the following text and save the file under the name SkupZaTreniranje.csv.

!s[mm/o]         d[mm]         F[ N]
!---------------------------------------------------
0.25;8;318.8
0.35;8;437
0.25;14;450
0.35;14;530.3
0.3;11;445.6
0.3;11;467
0.3;11;475.5
0.3;11;456.8
0.3;11;469
0.38;11;480.8
0.23;11;399
0.3;16;588.2
0.3;7;320

Note that the columns are separated by ‘;’ (semicolon) and the rows by new lines. It is also important to keep in mind that decimal digits are separated by a dot instead of a comma, and that any line containing text, such as a column name, must be prefixed with the character ‘!’, which marks it as a line that is not processed.
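A minimal sketch of parsing such a file (semicolon separators, dot decimals, ‘!’-prefixed lines skipped); the parser is illustrative, not GPdotNET's importer:

```python
# Illustrative parser for a GPdotNET-style training file:
# semicolon-separated values, dot as decimal separator,
# '!'-prefixed lines skipped. Not GPdotNET's actual importer.

def load_training_set(path):
    """Read a semicolon-separated training file into a list of float rows."""
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("!"):
                continue  # header/comment lines are not processed
            rows.append([float(cell) for cell in line.split(";")])
    return rows

# Round trip with the first three data rows from the file above.
sample = "!s[mm/o];d[mm];F[N]\n0.25;8;318.8\n0.35;8;437\n0.25;14;450\n"
with open("SkupZaTreniranje.csv", "w", encoding="utf-8") as f:
    f.write(sample)
print(load_training_set("SkupZaTreniranje.csv"))
```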

Once we have such a file, we can load the data into GPdotNET.

1. Start GPdotNET and choose the New command. A dialog appears for selecting the type of model we want to build. Leave the default values and press the OK button.

2. On the Load Data tab, press the “Training Data” button, select the file we previously created, and press the OK button.

3. In the third step we set the GP parameters. They should be set as shown in the picture below.

4. Now all that remains is to run the solution search simulation by clicking the RUN command.

5. Once we obtain a model that suits us, we can inspect its form on the “Result” tab, and perform further analysis of the results via the Export commands.

We have seen how simply and effectively we can model the results of our experimental research without unnecessary time lost on configuration. We have also seen how GPdotNET can produce very accurate mathematical models using the genetic programming method.

# GPdotNET v3.0 is out

I am very proud to announce GPdotNET v3.0, the new version which brings 3 new solvers based on the Genetic Algorithm and several new features. In previous posts I wrote about those new features:

The source code and Click Once installation have been uploaded to the codeplex site, so go and grab the new version: http://gpdotnet.codeplex.com

I am asking for feedback and bug reports. If you find something strange, post a comment here, anywhere on the blog, or on the codeplex site. I will try to answer as quickly as possible.

Here are screenshots with new Assignment and Transportation solvers in GPdotNET:

Loaded  training data of Assignment problem

Simulation Tab page with optimal result of Assignment problem

Loaded  training data of Transportation problem

Simulation Tab page with optimal result of Transportation problem

# Skrgic Selection in GPdotNET

## Introduction

This document presents the Skrgic Selection Method, one of several selection methods in GPdotNET. While developing GPdotNET I had several conversations with my friend Fikret Skrgic (master in computer science and mathematics) about selection in evolutionary algorithms. He is an incredible man: in just a few minutes, while I was describing what I wanted from a new selection method, he came up with its basic idea. A few emails later, I had a new selection method ready for implementation in GPdotNET. I named the new selection method after his surname, as a small token of gratitude to him.

The following text presents the idea behind Skrgic Selection.

## Linear Skrgic Selection (LSS)

The idea behind this selection is based on chromosome fitness. The rule is simple: the bigger the chromosome's fitness, the bigger its chance of being selected. In the population, chromosomes are already sorted from best to worst.
The selection process is the following:

1. Find the maximum and minimum fitness values in the population.
2. The best chromosome (maximum fitness) has position 0 in the population (zero-based index), and the worst chromosome has index $N-1$.
3. Name these values $f_{max}$ and $f_{min}$, respectively.
4. Choose a random number $r$ between $f_{min}$ and $f_{max}$.
5. Choose a random chromosome $chrom$ from the population.
6. Compare the values of $r$ and $f_{chrom}$. If $f_{chrom}$ is greater than $r$, select $chrom$.
7. Repeat steps 4–6 until one chromosome is selected.
8. Repeat steps 4–7 until $n$ chromosomes are selected.
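The steps above can be sketched directly (illustrative Python; GPdotNET's implementation is in C# and may differ in details):

```python
# Direct sketch of the Linear Skrgic Selection steps described above.
# Illustrative Python; GPdotNET's C# implementation may differ in details.
import random

def skrgic_select(fitnesses, rng=random):
    """Return the index of one chromosome chosen by Linear Skrgic Selection."""
    f_min, f_max = min(fitnesses), max(fitnesses)
    if f_min == f_max:                      # degenerate population: pick any
        return rng.randrange(len(fitnesses))
    while True:
        r = rng.uniform(f_min, f_max)       # step 4: random threshold
        i = rng.randrange(len(fitnesses))   # step 5: random chromosome
        if fitnesses[i] > r:                # step 6: keep it if fitter than r
            return i                        # step 7: one chromosome selected

def skrgic_selection(fitnesses, n, rng=random):
    """Step 8: repeat the procedure until n chromosomes are selected."""
    return [skrgic_select(fitnesses, rng) for _ in range(n)]

print(skrgic_selection([9.0, 7.5, 4.0, 2.0, 1.0], 3))
```

Because step 6 uses a strict comparison, fitter chromosomes survive the random threshold test proportionally more often, which is exactly the linear probability profile the method is named for.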

The following picture shows the graph of Linear Skrgic Selection:

Linear Skrgic Selection (LSS)

## Nonlinear Skrgic Selection (NSS)

In LSS the graph of the selection probability is linear. This means that if we have a chromosome with fitness $f_{chrom}$ and another chromosome with fitness $\frac{f_{chrom}}{2}$, their selection probabilities are $p$ and $\frac{p}{2}$, respectively; the probability of selection grows linearly with fitness.
If we want to change the probability ratio between chromosomes, we can define a factor $k$ that acts as a selection pressure on all chromosomes in the population. Let $k$ be a real value, and define the fitness of each chromosome as:

$f_{nss}=f_{chrom} (1+k \frac{f_{chrom}}{f_{max}})$,

where:

• $f_{chrom}$ – fitness value of the chromosome,
• $f_{max}$ – maximum fitness value (fitness of the best chromosome in the population),
• $k$ – selection pressure,
• $f_{nss}$ – nonlinear fitness value of the chromosome.

We can conclude that the maximum nonlinear fitness value of the best chromosome is given by:

$f_{max(nss)}=f_{max}(1+k)$.
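The transform and its special cases can be checked with a one-line sketch (the function name is illustrative):

```python
# One-line sketch of the nonlinear fitness transform above;
# k is the selection pressure. The function name is illustrative.

def nss_fitness(f_chrom, f_max, k):
    """f_nss = f_chrom * (1 + k * f_chrom / f_max)."""
    return f_chrom * (1 + k * f_chrom / f_max)

print(nss_fitness(10.0, 10.0, 0.5))   # best chromosome: f_max * (1 + k) = 15.0
print(nss_fitness(4.0, 10.0, 0.0))    # k = 0 reduces to the plain fitness: 4.0
```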

For various values of the parameter $k$ we can draw the graph of the selection probability, as in the following picture:

Graph for various sample of NonLinear Skrgic Selection (NLSS)

In the picture above we can see that for $k=0$ we get the standard Linear Skrgic Selection (LSS). We can also conclude that there is a fitness value at which the selection probability is constant and does not depend on the parameter $k$. A very interesting graph occurs when $k=-1$: in this case, the selection probabilities of the worst and the best chromosomes are equal.

GPdotNET settings of GP parameters

# Yet another scientific work based on GPdotNET

The University of Bihać will organize the 8th scientific conference on Development and Revitalization of Production (RIM 2011) this week. I will also be at this conference with my work on modeling the hardness of steel welds by genetic programming, which is based on my genetic programming tool GPdotNET. I am very proud to announce that this is my second work based on GPdotNET. After my 2007 work on modeling and optimization of the drilling process by evolutionary algorithms, this time it is about modeling steel welds. This work is an introduction to my Master's Thesis, which is going to be out soon :).
The presentation will be on Saturday at 9:30. More about the conference can be found at http://www.rim.tfb.ba.
See you there.

# The New version of GPdotNET is coming soon

The new stable release of GPdotNET will be built on .NET 4.0, so .NET 3.5 will no longer be supported. The reason is compatibility with ParallelFX, which as of .NET 4.0 is an integral part of the System namespace.

Some of the new features:

1. Localization – the application will be localized into English and Bosnian.

2. Improvements to the selection engine. The current 4 selections (Elite, Rank, Roulette-Wheel and Tournament) have been optimized and rewritten.

a. There are 3 new selections:

i. Stochastic Uniform Selection (SUS)
ii. Fitness Uniform Selection Scheme (FUSS)
iii. Skrgic Selection Scheme (SSS)

3. Parallel processing has also been optimized, but it is still not complete.

4. The fitness measures have been expanded to over 15 types (similar to GeneXproTools 4.0):

1. Absolute Error with Selection Range
2. Absolute/Hits
3. MSE (mean squared error)
4. RMSE (root mean squared error)
5. MAE (mean absolute error)
6. RSE (relative squared error)
7. RRSE (root relative squared error)
8. RAE (relative absolute error)
9. Relative Error with Selection Range
10. Relative/Hits
11. rMSE (relative MSE)
12. rRMSE (relative RMSE)
13. rMAE (relative MAE)
14. rRSE (relative RSE)
15. rRRSE (relative RRSE)
16. rRAE (relative RAE)
17. R-square
18. Correlation Coefficient

5. Export results to Excel for further analysis
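A few of the measures listed above, in their standard textbook forms (GPdotNET's exact variants may differ):

```python
# Standard textbook forms of a few error measures from the list above
# (GPdotNET's exact variants may differ).
import math

def mse(actual, predicted):
    """Mean squared error."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error."""
    return math.sqrt(mse(actual, predicted))

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rse(actual, predicted):
    """Relative squared error: SSE divided by the SSE of the mean predictor."""
    mean = sum(actual) / len(actual)
    sse = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return sse / sum((a - mean) ** 2 for a in actual)

y, yhat = [3.0, 5.0, 7.0], [2.0, 5.0, 9.0]
print(mse(y, yhat), mae(y, yhat))
```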

Screenshots:

Figure 1: Dialog for creating a model (1. Data model, 2. Time series model)

Figure 2 Excel definition of the function, for Excel result export

Figure 3: 7 Selection methods

Figure 4: Over 15 Fitness type of measures

Figure 5: Export Options