# Announcing Neural Networks in GPdotNET

For the last few months I have been experimenting with Artificial Neural Networks (ANN) and how to implement them in GPdotNET. ANN is one of the most popular methods in Machine Learning, especially the Back Propagation algorithm. First of all, an Artificial Neural Network is more complex than a Genetic Algorithm, and you need to dive deeper into math and other related fields in order to understand some of the core concepts of ANN. Luckily, there are tons of fantastic learning sources about ANN. Here are my recommendations:

First of all, there are several MSDN Magazine articles about ANN and how to implement them in C#.

If you want to know what’s behind the scenes of ANN, read this fantastic online book with great animations of how neurons and neural networks work.

1. Neural Networks and Deep Learning,  by Michael Nielsen.

There is also a series of YouTube videos about ANN.

Open source libraries about ANN in C#:

1. AForge.NET – Computer Vision, Artificial Intelligence, Robotics.

2. numl – Common Machine Learning Algorithms by Seth Juarez

The first GPdotNET v4.0 beta will be out very soon.

# Tools for Analyzing the Results of Experimental Research

Whether you are writing a scientific paper or a master’s or doctoral thesis, you will be handling the results of your research, which are mostly in discrete form. Discrete research results are primarily given in tabular form, with several input parameters and one or more output variables.
Suppose you measured, for example, the cutting force, while varying the tool diameter and the feed rate. In that case the result of your measurement might be a table similar to this one:

```
No.         s [mm/o]         d [mm]         F [N]
---------------------------------------------------
 1           0,25              8            318,8
 2           0,35              8            437
 3           0,25             14            450
 4           0,35             14            530,3
 5           0,3              11            445,6
 6           0,3              11            467
 7           0,3              11            475,5
 8           0,3              11            456,8
 9           0,3              11            469
10           0,38             11            480,8
11           0,23             11            399
12           0,3              16            588,2
13           0,3               7            320
---------------------------------------------------
```

In my opinion, the best tool for modeling data given in discrete form is Wolfram’s Mathematica. To obtain regression models in Mathematica, the experimental data must first be prepared, i.e. a variable eksperiment must be defined with the values from the table. From the experimental results shown in the table we want to perform a regression analysis and define a mathematical model, i.e. the functional dependence of the drilling force on the tool diameter and the feed rate.

The source code in the following listing shows one way to store the data in a variable representing the list of experimental values.

```
eksperiment={{0.25,8,318.8},{0.35,8,437},{0.25,14,450},{0.35,14,530.3},{0.3,11,445.6},{0.3,11,467},{0.3,11,475.5},{0.3,11,456.8},{0.3,11,469},{0.38,11,480.8},{0.23,11,399},{0.3,16,588.2},{0.3,7,320}}
```

Now that we have the variable, it is very simple to obtain mathematical models. The variable is a 2D array consisting of the rows and columns of our original table.

For example, to obtain a second-order regression model with linear interaction between the terms, run the following command:

```
rModel2=Fit[eksperiment,{1,x,x^2,y,y^2,x*y},{x,y}]
```

With the command above, Mathematica determines the quadratic model using the least squares method. As can be seen, the Fit command takes the model scheme as one of its arguments. The model scheme lists the polynomial terms that will appear in the mathematical model. After executing these two commands, Mathematica returns the mathematical model, highlighted with a red rectangle in the screenshot.
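The same fit can be reproduced outside Mathematica. The sketch below is a stdlib-only Python version of the least squares solution for the scheme {1, x, x^2, y, y^2, x*y}, solving the normal equations (AᵀA)c = AᵀF by Gaussian elimination (a toy illustration, not a replacement for Fit):

```python
# Experimental data from the table above: (s, d, F)
eksperiment = [(0.25,8,318.8),(0.35,8,437),(0.25,14,450),(0.35,14,530.3),
               (0.3,11,445.6),(0.3,11,467),(0.3,11,475.5),(0.3,11,456.8),
               (0.3,11,469),(0.38,11,480.8),(0.23,11,399),(0.3,16,588.2),
               (0.3,7,320)]

def design_row(x, y):
    # one row of the design matrix for the scheme {1, x, x^2, y, y^2, x*y}
    return [1.0, x, x * x, y, y * y, x * y]

def solve(M, b):
    # Gaussian elimination with partial pivoting on the augmented matrix
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    c = [0.0] * n
    for i in range(n - 1, -1, -1):
        c[i] = (A[i][n] - sum(A[i][j] * c[j] for j in range(i + 1, n))) / A[i][i]
    return c

rows = [design_row(x, y) for x, y, _ in eksperiment]
F = [f for _, _, f in eksperiment]
# Normal equations: (A^T A) c = A^T F
AtA = [[sum(r[i] * r[j] for r in rows) for j in range(6)] for i in range(6)]
AtF = [sum(r[i] * f for r, f in zip(rows, F)) for i in range(6)]
coeffs = solve(AtA, AtF)
```

`coeffs` then holds the six polynomial coefficients, which should agree with Mathematica’s rModel2 output.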

Of course, the Fit command accepts any combination of factors and any polynomial degree, so the reader is left to explore the other models as well. For example, it is quite interesting to determine a third-order regression model with linear and quadratic interactions between the input parameters.

It looks even nicer when the obtained regression model is shown graphically (e.g. with Mathematica’s Plot3D command).

We have seen how easily regression models can be obtained from a discrete data set representing your experimental research. Of course, all of this can also be done in Microsoft Excel, just with a bit more effort.

## Modeling Data with Genetic Programming

Data can also be modeled with the evolutionary method of genetic programming, which can produce very high-quality models, often considerably more accurate than regression models. The advantage of evolutionary models (models obtained by one of the evolutionary methods) is that they depend neither on the polynomial degree nor on the interactions between the input parameters. In this way the models, as well as the interdependence of the input parameters, are generated naturally. One of the tools that uses genetic programming for modeling experimental results is GPdotNET, which builds mathematical models in a very simple and intuitive way. More information about GPdotNET can be found at https://bhrnjica.net/GPdotNET.

To load the experimental results presented in the table above into GPdotNET, we need to create a CSV file that defines the training data set.

– Open Notepad, copy the following text, and save the file as SkupZaTreniranje.csv.

```
!s[mm/o]         d[mm]         F[N]
!---------------------------------------------------
0.25;8;318.8
0.35;8;437
0.25;14;450
0.35;14;530.3
0.3;11;445.6
0.3;11;467
0.3;11;475.5
0.3;11;456.8
0.3;11;469
0.38;11;480.8
0.23;11;399
0.3;16;588.2
0.3;7;320
```

Note that the columns are separated by ‘;’ (semicolon) and the rows by new lines. It is also important to keep in mind that decimal digits are separated by a dot instead of a comma, and that any row containing text (column names, etc.) must be prefixed with ‘!’, which marks it as a line that is not processed.
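The loading rules just described can be sketched in a few lines of Python (a toy parser for illustration, not GPdotNET’s actual loader):

```python
# Parse the semicolon-separated training file, skipping '!'-prefixed
# lines (headers and other text that must not be processed).
def load_training_data(text):
    rows = []
    for line in text.strip().splitlines():
        if line.startswith("!") or not line.strip():
            continue
        rows.append([float(v) for v in line.split(";")])
    return rows

sample = """!s[mm/o]         d[mm]         F[N]
0.25;8;318.8
0.35;8;437
0.3;7;320"""
data = load_training_data(sample)
```

Each row of `data` then holds the input values followed by the output value, matching the column order of the file.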

1. Start GPdotNET and choose the New command. A dialog appears for selecting the type of model we want to build. Leave the default values and press the OK button.

2. In the Load Data tab, press the “Training Data” button, select the file we created earlier, and press the OK button.

3. In the third step, set the GP parameters. They should be set as shown in the picture below.

4. Now all that remains is to start the search for a solution by clicking the RUN command.

5. Once we obtain a model that satisfies us, we can see its form on the “Result” tab, and via the Export commands we can perform further analysis of the results.

We have seen how simply and effectively we can model the results of experimental research without wasting time on excessive configuration. We have also seen how GPdotNET can produce very accurate mathematical models by means of genetic programming.

# Function optimization with Genetic Algorithm by using GPdotNET

Content

1. Introduction
2. Analytic function optimization module in GPdotNET
3. Examples of function optimizations
4. C# Implementation behind GPdotNET Optimization module

Introduction

GPdotNET is an artificial intelligence tool for applying Genetic Programming and Genetic Algorithms to the modeling and optimization of various engineering problems. It is a .NET (Mono) application written in the C# programming language which can run on both Windows and Linux based OSes, or any OS which can run the Mono framework. On the other hand, GPdotNET is very easy to use: even if you have no deep knowledge of GP and GA, you can apply those methods to finding a solution. The project can be used for modeling any kind of engineering process which can be described with discrete data. It can also be used in education while teaching students about evolutionary methods, mainly GP and GA. GPdotNET is an open source project hosted at http://gpdotnet.codeplex.com

With the release of GPdotNET v2 it is also possible to find the optimum of any analytic function regardless of the number of independent variables. For example, you can find the optimum value of an analytically defined function with 2, 5, 10 or 100 independent variables. With classic methods, optimization of functions of 3 or more independent variables is very difficult and sometimes impossible. It is also very hard to find the optimum of relatively complex functions regardless of the number of independent variables.
Because GPdotNET is based on a Genetic Algorithm, we can find an approximate optimum of any function regardless of the number of independent variables or the complexity of its definition. This blog post gives a detailed description of how to use GPdotNET to optimize a function. It also covers the C# implementation behind the optimization process, showing the representation of a chromosome with real numbers, as well as the fitness calculation, which is based on a Genetic Programming tree expression. Several real-world optimization problems will also be presented and solved with GPdotNET.

# Analytic Function Optimization Module in GPdotNET

When GPdotNET is opened, you can choose several predefined and calculated models from various problem domains, as well as create a New model, among other options. Choosing New model opens a new dialog box like the picture below.

By choosing Optimization of Analytic Function (see the picture above) and pressing the OK button, GPdotNET prepares the model for optimization and opens 3 tab pages:

1. Analytic function,
2. Settings and
3. Optimize Model.

## Analytic function

Using the “Analytic function” tab you can define the expression of the function. More information about how to define the mathematical expression of an analytic function can be found in this blog post.

Using the “Analytic definition tool” at the bottom of the page, it is possible to define the analytic expression. The expression tree builder generates the function as a Genetic Programming expression tree, because GPdotNET fully implements both methods. Sharing Genetic Programming features in Genetic Algorithm based optimization is unique, and it is implemented only in GPdotNET.

When the process of defining the function is finished, press the Finish button in order to proceed with further actions. The Finish button applies all changes to the Optimization Model tab, so if you have made some changes in the function definition, pressing Finish sends them to the optimization tab.
Defining the expression of a function is relatively simple, but it is still not a natural way of defining a function, and it will be changed in the future. For example, in picture 2 you can see the expression tree which represents:

$f(x,y)=x\sin(4x)+1.1\,y\sin(2y)$.
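As a sketch of what such an expression tree encodes, here is a minimal GP-style tree in Python for the function from the Wolfram Alpha example, f(x, y) = x·sin(4x) + 1.1·y·sin(2y). The node/var/const helpers are illustrative, not GPdotNET’s API:

```python
import math

def node(op, *children):
    # an inner tree node: apply op to the evaluated children
    return lambda env: op(*[child(env) for child in children])

def var(name):
    # a terminal node: look the variable up in the environment
    return lambda env: env[name]

def const(v):
    # a constant terminal node
    return lambda env: v

mul = lambda a, b: a * b
add = lambda a, b: a + b

# x*sin(4x) + 1.1*y*sin(2y) as a tree of nested nodes
f = node(add,
         node(mul, var("x"), node(math.sin, node(mul, const(4.0), var("x")))),
         node(mul, node(mul, const(1.1), var("y")),
                   node(math.sin, node(mul, const(2.0), var("y")))))
```

Evaluating `f({"x": 1.0, "y": 2.0})` walks the tree from the leaves up, much as a GP engine evaluates its expression tree against terminal values.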

## Setting GA parameters

The second step in the optimization is setting the Genetic Algorithm parameters which will be used in the optimization process. Open the Settings tab and set the main GA parameters, see pic. 3.

To successfully apply GA to the optimization, it is necessary to define:

1.  population size,
2. probabilities of genetic operators and
3. selection methods.

These parameters are common to all GA and GP models. More information about the parameters can be found at https://bhrnjica.net/gpdotnet.

## Optimize model (running optimization)

When the GA parameters are defined, we can start the optimization by selecting the Optimization model tab. Before the run, we have to define constraints for each independent variable. This is the only information we have to provide in order to start the optimization. The picture below shows how to define constraints in 3 steps:

1. select a row with a left mouse click,
2. enter the min and max values in the text boxes, and
3. press the Update button.

Perform these 3 actions for each independent variable defined in the function.

When the process of defining constraints is finished, it is time to run the calculation by pressing the Optimize button on the main toolbar (green button). During the optimization process GPdotNET presents a nice animation of the fitness values, as well as the current best optimal value. The picture above shows the result of the optimization process with GPdotNET. It can be seen that the optimal value for this sample is $f_{opt}(9.96)=-100.22$.
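The steps above (constrained random initialization, selection, crossover, mutation, repeated for a number of generations) can be condensed into a toy real-coded GA in Python. This is a sketch of the general idea, not GPdotNET’s engine; it minimizes the benchmark function (x^2 + x)·cos(x) on [-10, 10] that appears later in the post:

```python
import math
import random

def objective(x):
    # benchmark function used elsewhere in the post: (x^2 + x) * cos(x)
    return (x * x + x) * math.cos(x)

def optimize(pop_size=50, generations=200, lo=-10.0, hi=10.0, seed=1):
    rnd = random.Random(seed)
    # constrained random initialization: one float per independent variable
    pop = [rnd.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)            # minimization: lower is better
        survivors = pop[: pop_size // 2]   # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rnd.sample(survivors, 2)
            beta = rnd.random()            # blend crossover factor in [0, 1)
            child = a - beta * (a - b)     # convex combination of the parents
            if rnd.random() < 0.1:        # mutation: random reset within constraints
                child = rnd.uniform(lo, hi)
            children.append(child)
        pop = survivors + children
    return min(pop, key=objective)

best = optimize()
```

With these settings the best individual typically lands near x ≈ 9.6 with f ≈ -100.2, in line with the optimal value reported for this sample.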

# Examples of function optimization

In this section we are going to calculate the optimal value of several functions using GPdotNET. To prove that the obtained value is correct, or very close to correct, we will use Wolfram Alpha or another method.

### Function: x sin(4x) + 1.1 y sin(2y)

The GP expression tree looks like the following picture (left side):

The optimal value was found (right above picture) in 0.054 min, at generation 363 of 500. The optimal value is f(9.03, 8.66) = -18.59.

Here is the Wolfram Alpha calculation of the same function: http://www.wolframalpha.com/input/?i=min+x*sin%284*x%29%2B+1.1+*y*+sin%282+*y%29%2C+0%3Cx%3C10%2C0%3Cy%3C10

### Function:  (x^2+x)cos(x),  -10≤x≤10

The GP expression tree looks like the following picture (left side):

The optimal value was found in 0.125 min, at generation 10 of 500. The optimal value is F(9.62) = -100.22.

Here is the Wolfram Alpha calculation of the same function: http://www.wolframalpha.com/input/?i=minimum+%28x%5E2%2Bx%29*cos%28x%29+over+%5B-10%2C10%5D

### Easom’s function: fEaso(x1,x2) = -cos(x1)*cos(x2)*exp(-((x1-pi)^2+(x2-pi)^2)), -100 <= x(i) <= 100, i = 1:2

The GP expression tree looks like the following picture (left side):

The optimal value was found in 0.061 min, at generation 477 of 500. The optimal value is f(3.14, 3.14) = -1.

The function description can be seen at this MATLAB link.
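For spot-checking the reported optima, the three benchmark functions (as given in the Wolfram Alpha and MATLAB links above) are easy to define in plain Python:

```python
import math

def f1(x, y):
    # x*sin(4x) + 1.1*y*sin(2y), minimized over 0 < x, y < 10
    return x * math.sin(4 * x) + 1.1 * y * math.sin(2 * y)

def f2(x):
    # (x^2 + x) * cos(x), minimized over -10 <= x <= 10
    return (x * x + x) * math.cos(x)

def easom(x1, x2):
    # Easom's function: global minimum of -1 at x1 = x2 = pi
    return (-math.cos(x1) * math.cos(x2)
            * math.exp(-((x1 - math.pi) ** 2 + (x2 - math.pi) ** 2)))
```

For example, easom(math.pi, math.pi) returns exactly -1.0, and f2(9.62) evaluates to about -100.2, in line with the values above.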

# C# Implementation behind GPdotNET Optimization module

The GPdotNET optimization module is just one part incorporated into the GPdotNET engine. What is specific to this module is the chromosome implementation, as well as the fitness function. The chromosome implementation is based on floating point values instead of the classic binary representation. Such a chromosome contains an array of floating point values, and each array element represents one independent variable of the function. If the function contains two independent variables (x, y), the chromosome will contain an array with two floating point values. The constraints on the chromosome values are the constraints we defined while setting up the optimization process. The following source code listing shows the implementation of the GANumChromosome class in GPdotNET:

```
public class GANumChromosome : IChromosome
{
    private double[] val = null;
    private float fitness = float.MinValue;
    // ... rest of the implementation
}
```

When the chromosome is generated, the array elements get values randomly generated between the min and max values defined by the constraints. Here is the source code of the Generate method.

```
/// <summary>
/// Generates a value for each represented variable
/// </summary>
public void Generate(int param = 0)
{
    if (val == null)
        val = new double[functionSet.GetNumVariables()];
    else if (val.Length != functionSet.GetNumVariables())
        val = new double[functionSet.GetNumVariables()];

    for (int i = 0; i < functionSet.GetNumVariables(); i++)
    {
        // assign a random value within the variable's [min, max] constraint;
        // minValue, maxValue and rnd are assumed names -- the loop body is
        // missing from the original listing
        val[i] = minValue[i] + rnd.NextDouble() * (maxValue[i] - minValue[i]);
    }
}
```

Mutation is accomplished by choosing an array element at random and randomly changing its value. Here is the listing:

```
/// <summary>
/// Selects an array element at random and randomly changes its value
/// </summary>
public void Mutate()
{
    // randomly select an array element
    // randomly generate a new value for the selected element
}
```

Crossover is a little bit more complicated. It is implemented based on the book Practical Genetic Algorithms (see pages 56-59). Here is the implementation:

```
/// <summary>
/// Generates a number between 0 and 1; for each array element past the
/// crossover point, the two chromosomes blend their values based on it.
/// </summary>
public void Crossover(IChromosome ch2)
{
    GANumChromosome p = (GANumChromosome)ch2;
    for (int i = crossoverPoint; i < functionSet.GetNumVariables(); i++)
    {
        double beta = rnd.NextDouble(); // rnd is an assumed Random instance
        double diff = val[i] - p.val[i]; // take the difference before either value is overwritten
        val[i] = val[i] - beta * diff;
        p.val[i] = p.val[i] + beta * diff;
    }
}
```
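The blend rule from Practical Genetic Algorithms is easy to check in plain Python. In this sketch (not GPdotNET’s code) the difference is taken from the original parent values, so each pair of children preserves the parents’ sum and stays within the parents’ value range:

```python
import random

def crossover(a, b, point, seed=42):
    # Blend (arithmetic) crossover: past the crossover point, both
    # children become convex combinations of the two parents.
    rnd = random.Random(seed)
    a, b = list(a), list(b)
    for i in range(point, len(a)):
        beta = rnd.random()        # blending factor in [0, 1)
        diff = a[i] - b[i]         # difference of the ORIGINAL values
        a[i] = a[i] - beta * diff  # child 1: (1-beta)*a[i] + beta*b[i]
        b[i] = b[i] + beta * diff  # child 2: (1-beta)*b[i] + beta*a[i]
    return a, b

c1, c2 = crossover([1.0, 2.0], [3.0, 8.0], 0)
```

Because both children are convex combinations, no gene can leave the interval spanned by its two parent genes, which keeps offspring inside the defined constraints.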

The fitness function for optimization is straightforward: it evaluates each chromosome against the tree expression. For a minimum, the better chromosome is the one with the lower value; for a maximum, the better chromosome is the one with the higher fitness value. Here is the implementation of the optimization fitness function:

```
/// <summary>
/// Evaluates the function against the terminals
/// </summary>
/// <param name="chromosome">chromosome to evaluate</param>
/// <param name="functionSet">function set used for the evaluation</param>
public float Evaluate(IChromosome chromosome, IFunctionSet functionSet)
{
    GANumChromosome ch = chromosome as GANumChromosome;
    if (ch == null)
        return 0;
    else
    {
        // prepare terminals
        var term = Globals.gpterminals.SingleTrainingData;
        for (int i = 0; i < ch.val.Length; i++)
            term[i] = ch.val[i];

        var y = functionSet.Evaluate(_funToOptimize, -1);

        if (double.IsNaN(y) || double.IsInfinity(y))
            y = float.NaN;

        // save the output into the output variable
        term[term.Length - 1] = y;

        if (IsMinimize)
            y *= -1;

        return (float)y;
    }
}
```

# Summary

We have seen that the function optimization module within GPdotNET is a powerful optimization tool. It can find a pretty close solution for very complex functions regardless of the number of independent variables. The optimization module uses the Genetic Algorithm method with the floating point chromosome representation described in several books about GA. It is fast, simple, and can be used in education as well as for solving real problems. More info about GPdotNET can be found at https://bhrnjica.net/gpdotnet.