Blog Archives

Testing and Validation of CNTK models using C#


…continue from the previous post.
Once the model is built and the loss and evaluation functions meet our expectations, we need to validate and test the model using data which was not part of the training data set (unseen data). Model validation is very important because we want to see whether our model is trained well, so that it evaluates unseen data approximately as well as the training data. A model that cannot predict the output on new data is called an overfitted model. Overfitting can happen when the model is trained long enough to show very high performance on the training data set but evaluates poorly on the testing data.
We will continue with the implementation from the previous two posts and implement model validation. After the model is trained, the model and the trainer are passed to the evaluation method. The evaluation method loads the testing data and calculates the output using the passed model. Then it compares the calculated (predicted) values with the output from the testing data set and computes the accuracy. The following source code shows the evaluation implementation.

private static void EvaluateIrisModel(Function ffnn_model, Trainer trainer, DeviceDescriptor device)
{
    var dataFolder = "Data";//files must be on the same folder as program
    var trainPath = Path.Combine(dataFolder, "testIris_cntk.txt");
    var featureStreamName = "features";
    var labelsStreamName = "label";

    //extract features and label from the model
    var feature = ffnn_model.Arguments[0];
    var label = ffnn_model.Output;

    //stream configuration to distinct features and labels in the file
    var streamConfig = new StreamConfiguration[]
        {
            new StreamConfiguration(featureStreamName, feature.Shape[0]),
            new StreamConfiguration(labelsStreamName, label.Shape[0])
        };

    // prepare testing data
    var testMinibatchSource = MinibatchSource.TextFormatMinibatchSource(
        testPath, streamConfig, MinibatchSource.InfinitelyRepeat, true);
    var featureStreamInfo = testMinibatchSource.StreamInfo(featureStreamName);
    var labelStreamInfo = testMinibatchSource.StreamInfo(labelsStreamName);

    int batchSize = 20;
    int miscountTotal = 0, totalCount = 0;
    while (true)
    {
        var minibatchData = testMinibatchSource.GetNextMinibatch((uint)batchSize, device);
        if (minibatchData == null || minibatchData.Count == 0)
            break;
        totalCount += (int)minibatchData[featureStreamInfo].numberOfSamples;

        // expected labels are in the mini batch data.
        var labelData = minibatchData[labelStreamInfo].data.GetDenseData<float>(label);
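        //the position of the max value in each one-hot output is the class index (argmax)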
        var expectedLabels = labelData.Select(l => l.IndexOf(l.Max())).ToList();

        var inputDataMap = new Dictionary<Variable, Value>() {
            { feature, minibatchData[featureStreamInfo].data }
        };

        var outputDataMap = new Dictionary<Variable, Value>() {
            { label, null }
        };

        ffnn_model.Evaluate(inputDataMap, outputDataMap, device);
        var outputData = outputDataMap[label].GetDenseData<float>(label);
        var actualLabels = outputData.Select(l => l.IndexOf(l.Max())).ToList();

        int misMatches = actualLabels.Zip(expectedLabels, (a, b) => a.Equals(b) ? 0 : 1).Sum();

        miscountTotal += misMatches;
        Console.WriteLine($"Validating Model: Total Samples = {totalCount}, Mis-classify Count = {miscountTotal}");

        if (totalCount >= 20)
            break;
    }
    Console.WriteLine($"---------------");
    Console.WriteLine($"------TESTING SUMMARY--------");
    float accuracy = 1.0F - (float)miscountTotal / totalCount;
    Console.WriteLine($"Model Accuracy = {accuracy}");
}

The implemented method is called at the end of the training method from the previous post.

 EvaluateIrisModel(ffnn_model, trainer, device);

As can be seen, the model validation shows that the model predicts the unseen data with high accuracy, as shown in the following picture.

This was the last post in the series of blog posts about using feed forward neural networks to train the Iris data using CNTK and C#.

The full source code for all three samples can be found here.


Train Iris data by Batch using CNTK and C#


In the previous post we saw how to train an NN model by using a MinibatchSource, which is usually the right choice when we have a large amount of data. When the amount of data is small, all of it can be loaded into memory and passed to the trainer on each iteration. This blog post implements this way of feeding the trainer.
We will reuse the previous implementation, so the previous source code can be the starting point. For data loading we have to define a new method. The Iris data is stored in text format like the following:

sepal_length,sepal_width,petal_length,petal_width,species
5.1,3.5,1.4,0.2,setosa(1 0 0)
7.0,3.2,4.7,1.4,versicolor(0 1 0)
7.6,3.0,6.6,2.1,virginica(0 0 1)
...

The output column is encoded with the 1-of-N (one-hot) encoding rule we have seen previously, so in the actual file the species name is replaced by its one-hot values (e.g. 1,0,0 for setosa).
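As a quick illustration, a small hypothetical helper (not part of the original sample) shows how a class index maps to such a one-hot vector:

static float[] oneHot(int classIndex, int numClasses)
{
    //all components are zero except the one at the class index
    var vector = new float[numClasses];
    vector[classIndex] = 1f;
    return vector;
}

For example, oneHot(0, 3) returns {1, 0, 0} for setosa.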
The method will read all the data from the file, parse the data and create two float arrays:

  • float[] feature, and
  • float[] label.

As can be seen, both arrays are one-dimensional; all data is inserted into 1D arrays because CNTK requires it. Since the data is in a 1D array, we should also provide the dimensionality of the data so that CNTK can resolve which values belong to each feature. The following listing shows loading the Iris data into two 1D arrays returned as a tuple.

static (float[], float[]) loadIrisDataset(string filePath, int featureDim, int numClasses)
{
    var rows = File.ReadAllLines(filePath);
    var features = new List<float>();
    var label = new List<float>();
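    //skip the header row; each data row holds featureDim feature values followed by numClasses one-hot values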
    for (int i = 1; i < rows.Length; i++)
    {
        var row = rows[i].Split(',');
        var input = new float[featureDim];
        for (int j = 0; j < featureDim; j++)
        {
            input[j] = float.Parse(row[j], CultureInfo.InvariantCulture);
        }
        var output = new float[numClasses];
        for (int k = 0; k < numClasses; k++)
        {
            int oIndex = featureDim + k;
            output[k] = float.Parse(row[oIndex], CultureInfo.InvariantCulture);
        }

        features.AddRange(input);
        label.AddRange(output);
    }

    return (features.ToArray(), label.ToArray());
}

Once the data is loaded, we have to change only a small amount of the previous code in order to implement batching instead of using a MinibatchSource. At the beginning we provide several variables to define the NN model structure. Then we call loadIrisDataset and define xValues and yValues, which we use to create the feature and label input variables. Then we create a dictionary which connects the features and labels with the data values that we will pass to the trainer later.
The next part of the code is the same as in the previous version: it creates the NN model, the loss and evaluation functions, and the learning rate.

Then we create a loop of 800 iterations. Once the iteration count reaches the maximum value, the program outputs the model summary and terminates.
All of the above is implemented in the following code.

public static void TrainIriswithBatch(DeviceDescriptor device)
{
    //data file path
    var iris_data_file = "Data/iris_with_hot_vector.csv";

    //Network definition
    int inputDim = 4;
    int numOutputClasses = 3;
    int numHiddenLayers = 1;
    int hidenLayerDim = 6;

    //load data in to memory
    var dataSet = loadIrisDataset(iris_data_file, inputDim, numOutputClasses);

    //create Value objects (in-memory batches) from the loaded data
    var xValues = Value.CreateBatch<float>(new NDShape(1, inputDim), dataSet.Item1, device);
    var yValues = Value.CreateBatch<float>(new NDShape(1, numOutputClasses), dataSet.Item2, device);

    // build a NN model
    //define input and output variables
    var feature = Variable.InputVariable(new NDShape(1, inputDim), DataType.Float);
    var label = Variable.InputVariable(new NDShape(1, numOutputClasses), DataType.Float);

    //Combine variables and data in to Dictionary for the training
    var dic = new Dictionary<Variable, Value>();
    dic.Add(feature, xValues);
    dic.Add(label, yValues);

    //Build simple Feed Forward Neural Network model
    var ffnn_model = createFFNN(feature, numHiddenLayers, hidenLayerDim, numOutputClasses, Activation.Tanh, "IrisNNModel", device);

    //Loss and error functions definition
    var trainingLoss = CNTKLib.CrossEntropyWithSoftmax(new Variable(ffnn_model), label, "lossFunction");
    var classError = CNTKLib.ClassificationError(new Variable(ffnn_model), label, "classificationError");

    // set learning rate for the network
    var learningRatePerSample = new TrainingParameterScheduleDouble(0.001125, 1);

    //define learners for the NN model
    var ll = Learner.SGDLearner(ffnn_model.Parameters(), learningRatePerSample);

    //define trainer based on ffnn_model, loss and error functions , and SGD learner
    var trainer = Trainer.CreateTrainer(ffnn_model, trainingLoss, classError, new Learner[] { ll });

    //Preparation for the iterative learning process
    //used 800 epochs/iterations. Batch size will be the same as sample size since the data set is small
    int epochs = 800;
    int i = 0;
    while (epochs > 0)
    {
        trainer.TrainMinibatch(dic, device);

        //print progress
        printTrainingProgress(trainer, i++, 50);

        epochs--;
    }
    //Summary of training
    double acc = Math.Round((1.0 - trainer.PreviousMinibatchEvaluationAverage()) * 100, 2);
    Console.WriteLine($"------TRAINING SUMMARY--------");
    Console.WriteLine($"The model trained with the accuracy {acc}%");
}

If we run the code, the output will be the same as in the previous blog post example:

The full source code with training data can be downloaded here.

Train Iris data with MinibatchSource using CNTK and C#


So far (post1, post2, post3) we have seen what CNTK is, how to use it with Python, and how to create a simple C# .NET application and call basic CNTK methods. In this blog post we are going to implement a full C# program to train on the Iris data.

The first step in using CNTK is getting the data and feeding the trainer. In the previous post we prepared the Iris data in CNTK format, which is suitable when using a MinibatchSource. In order to use the MinibatchSource, we need to create two streams:

  • one for the features and
  • one for the label.

The feature and label variables must also be created using the streams, so that when the data is accessed through the variables, the trainer is aware that the data is coming from the file.

Data preparation

As mentioned above we are going to use CNTK MinibatchSource to load the Iris data.
The two files are prepared for this demo:

var dataPath = Path.Combine(dataFolder, "iris_with_hot_vector.csv");
var trainPath = Path.Combine(dataFolder, "iris_with_hot_vector_test.csv");

The first file contains the Iris data for training, and the second contains the data for testing, which will be used in a future post. These two files will be passed as arguments when creating the MinibatchSource for training and validation respectively.

The first step in getting the data from the file is defining the stream configuration with the proper information. This information will be used when the data is extracted from the file. The configuration is completed by providing the number of features and the number of one-hot vector components of the label in the file, as well as the names of the features and labels. The data is attached at the end of the blog post so the reader can see how the data is prepared for the MinibatchSource.

The following code defines the stream configuration for the Iris data set.

//stream configuration to distinct features and labels in the file
var streamConfig = new StreamConfiguration[]
  {
    new StreamConfiguration(featureStreamName, inputDim),
    new StreamConfiguration(labelsStreamName, numOutputClasses)
  };

The feature and label variables must also be created by providing the stream names above.

//define input and output variable and connecting to the stream configuration
var feature = Variable.InputVariable(new NDShape(1, inputDim), DataType.Float, featureStreamName);
var label = Variable.InputVariable(new NDShape(1, numOutputClasses), DataType.Float, labelsStreamName);

Now the input and output variables are connected with the data from the file, and the MinibatchSource can handle them.
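For illustration, the MinibatchSource for the training file can be created as in the following sketch (this is exactly what the full program below does; dataPath, streamConfig and the stream names are defined earlier):

//create the minibatch source which reads the training file
var minibatchSource = MinibatchSource.TextFormatMinibatchSource(
    dataPath, streamConfig, MinibatchSource.InfinitelyRepeat, true);

//stream infos used to access the features and labels of each minibatch
var featureStreamInfo = minibatchSource.StreamInfo(featureStreamName);
var labelStreamInfo = minibatchSource.StreamInfo(labelsStreamName);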

Creating Feed Forward Neural Network Model

Once we have defined the streams and variables, we can define the network model. CNTK is implemented so that you can define any number of hidden layers with any activation function.
For this demo we are going to create a simple feed forward neural network with one hidden layer. The picture below shows the NN model.

In order to implement the above NN model we need to implement three methods:

  • static Function applyActivationFunction(Function layer, Activation actFun)
  • static Function simpleLayer(Function input, int outputDim, DeviceDescriptor device)
  • static Function createFFNN(Variable input, int hiddenLayerCount, int hiddenDim, int outputDim, Activation activation, string modelName, DeviceDescriptor device)

The first method simply applies the specified activation function to the passed layer. The method is very simple and looks like this:

static Function applyActivationFunction(Function layer, Activation actFun)
{
    switch (actFun)
    {
        default:
        case Activation.None:
            return layer;
        case Activation.ReLU:
            return CNTKLib.ReLU(layer);
        case Activation.Sigmoid:
            return CNTKLib.Sigmoid(layer);
        case Activation.Tanh:
            return CNTKLib.Tanh(layer);
    }
}

The method takes the layer as an argument and returns the layer with the applied activation function.
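A hypothetical usage, assuming a layer Function named h has already been created:

//wrap an existing layer with the Tanh activation
h = applyActivationFunction(h, Activation.Tanh);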

The next method creates a simple layer with n weights and one bias. The method is shown in the following listing.

static Function simpleLayer(Function input, int outputDim, DeviceDescriptor device)
{
    //prepare default parameters values
    var glorotInit = CNTKLib.GlorotUniformInitializer(
            CNTKLib.DefaultParamInitScale,
            CNTKLib.SentinelValueForInferParamInitRank,
            CNTKLib.SentinelValueForInferParamInitRank, 1);

    //create weight and bias vectors
    var var = (Variable)input;
    var shape = new int[] { outputDim, var.Shape[0] };
    var weightParam = new Parameter(shape, DataType.Float, glorotInit, device, "w");
    var biasParam = new Parameter(new NDShape(1,outputDim), 0, device, "b");

    //construct W * X + b matrix
    return CNTKLib.Times(weightParam, input) + biasParam;
}

After initialization of the parameters, the Function object is created from the number of output components and the previous layer or the input variable. This is the so-called chaining in NN layer creation. With this strategy the user can create a very complex NN model.
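For example, chaining two simpleLayer calls produces a network with one hidden layer (a minimal sketch, assuming feature is the input variable defined earlier and device is the current DeviceDescriptor):

//hidden layer with 6 neurons created from the input variable
Function h = simpleLayer(feature, 6, device);
h = applyActivationFunction(h, Activation.Tanh);

//output layer with 3 neurons chained on top of the hidden layer
Function model = simpleLayer(h, 3, device);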

The last method performs the layer creation. It is called from the main method, and can create an arbitrary feed forward neural network, depending on the provided parameters.

static Function createFFNN(Variable input, int hiddenLayerCount, int hiddenDim, int outputDim, Activation activation, string modelName, DeviceDescriptor device)
{

    //hidden layers creation
    //first hidden layer
    Function h = simpleLayer(input, hiddenDim, device);
    h = applyActivationFunction(h, activation);
    //2,3, ... hidden layers
    for (int i = 1; i < hiddenLayerCount; i++)
    {
        h = simpleLayer(h, hiddenDim, device);
        h = applyActivationFunction(h, activation);
    }
    //the last action is creation of the output layer
    var r = simpleLayer(h, outputDim, device);
    r.SetName(modelName);
    return r;
}

Now that we have implemented the methods for NN model creation, the next step is the training implementation.
The training process is iterative, where the MinibatchSource feeds the trainer on each iteration.
The loss and the evaluation functions are calculated for each iteration and shown as iteration progress. The iteration progress is defined by a separate method, which looks like the following code listing:

private static void printTrainingProgress(Trainer trainer, int minibatchIdx, int outputFrequencyInMinibatches)
{
    if ((minibatchIdx % outputFrequencyInMinibatches) == 0 && trainer.PreviousMinibatchSampleCount() != 0)
    {
        float trainLossValue = (float)trainer.PreviousMinibatchLossAverage();
        float evaluationValue = (float)trainer.PreviousMinibatchEvaluationAverage();
        Console.WriteLine($"Minibatch: {minibatchIdx} CrossEntropyLoss = {trainLossValue}, EvaluationCriterion = {evaluationValue}");
    }
}

During the iterations, the loss function constantly decreases its value, indicating that the model is becoming better and better. Once the iteration process is completed, the model is shown in the context of its accuracy on the training data.

Full program implementation

The following listing shows the complete source code implementation of Iris data set training using CNTK. At the beginning, several variables are defined in order to specify the structure of the NN model: the number of input and output variables. The main method also implements the iteration process, where the MinibatchSource handles the data by passing the relevant portion to the trainer. More about it will come in a separate blog post. Once the iteration process is completed, the model result is shown and the program terminates.

public static void TrainIris(DeviceDescriptor device)
{
    var dataFolder = "";//files must be on the same folder as program
    var dataPath = Path.Combine(dataFolder, "iris_with_hot_vector.csv");
    var testPath = Path.Combine(dataFolder, "iris_with_hot_vector_test.csv");

    var featureStreamName = "features";
    var labelsStreamName = "labels";

    //Network definition
    int inputDim = 4;
    int numOutputClasses = 3;
    int numHiddenLayers = 1;
    int hidenLayerDim = 6;
    uint sampleSize = 130;

    //stream configuration to distinct features and labels in the file
    var streamConfig = new StreamConfiguration[]
        {
            new StreamConfiguration(featureStreamName, inputDim),
            new StreamConfiguration(labelsStreamName, numOutputClasses)
        };

    // build a NN model
    //define input and output variable and connecting to the stream configuration
    var feature = Variable.InputVariable(new NDShape(1, inputDim), DataType.Float, featureStreamName);
    var label = Variable.InputVariable(new NDShape(1, numOutputClasses), DataType.Float, labelsStreamName);

    //Build simple Feed Forward Neural Network model
    var ffnn_model = createFFNN(feature, numHiddenLayers, hidenLayerDim, numOutputClasses, Activation.Tanh, "IrisNNModel", device);

    //Loss and error functions definition
    var trainingLoss = CNTKLib.CrossEntropyWithSoftmax(new Variable(ffnn_model), label, "lossFunction");
    var classError = CNTKLib.ClassificationError(new Variable(ffnn_model), label, "classificationError");

    // prepare the training data
    var minibatchSource = MinibatchSource.TextFormatMinibatchSource(
        dataPath, streamConfig, MinibatchSource.InfinitelyRepeat, true);
    var featureStreamInfo = minibatchSource.StreamInfo(featureStreamName);
    var labelStreamInfo = minibatchSource.StreamInfo(labelsStreamName);

    // set learning rate for the network
    var learningRatePerSample = new TrainingParameterScheduleDouble(0.001125, 1);

    //define learners for the NN model
    var ll = Learner.SGDLearner(ffnn_model.Parameters(), learningRatePerSample);

    //define trainer based on ffnn_model, loss and error functions , and SGD learner
    var trainer = Trainer.CreateTrainer(ffnn_model, trainingLoss, classError, new Learner[] { ll });

    //Preparation for the iterative learning process
    //used 800 epochs/iterations. Batch size will be the same as sample size since the data set is small
    int epochs = 800;
    int i = 0;
    while (epochs > 0)
    {
        var minibatchData = minibatchSource.GetNextMinibatch(sampleSize, device);
        //pass to the trainer the current batch separated by the features and label.
        var arguments = new Dictionary<Variable, MinibatchData>
        {
            { feature, minibatchData[featureStreamInfo] },
            { label, minibatchData[labelStreamInfo] }
        };

        trainer.TrainMinibatch(arguments, device);

        printTrainingProgress(trainer, i++, 50);

        // The MinibatchSource is created with MinibatchSource.InfinitelyRepeat,
        // so batching will not end on its own. Each time the minibatchSource completes a sweep (epoch),
        // the last minibatch data is marked as the end of a sweep. We use this flag
        // to count the number of epochs.
        if (minibatchData.Values.Any(a => a.sweepEnd))
        {
            epochs--;
        }
    }
    //Summary of training
    double acc = Math.Round((1.0 - trainer.PreviousMinibatchEvaluationAverage()) * 100, 2);
    Console.WriteLine($"------TRAINING SUMMARY--------");
    Console.WriteLine($"The model trained with the accuracy {acc}%");

    //// validate the model
    // this will be posted as a separate blog post
}

The full source code with formatted Iris data set for training can be found here.

GPdotNET v4.0 has been released


After almost two years of implementation, I am proud to announce the fourth version of the open source project GPdotNET, v4.0. The latest version fully implements Genetic Programming and Artificial Neural Networks for supervised learning tasks in three kinds of problems: regression, binary and multiclass classification. Besides supervised learning tasks, with GPdotNET you can solve several Linear Programming problems: Traveling Salesman, Assignment and Transportation problems. The source code and binaries can be downloaded from the GitHub page: https://github.com/bhrnjica/gpdotnet/releases/tag/v4.0

Figure 1. Main Window in GPdotNET v4.0

Introduction

In 2006 GPdotNET started as a post-graduate semester project, where I was trying to implement a simple C# program based on genetic programming. After successfully implementing the console application, I started to implement a .NET Windows application that would be easy to use for anyone who wants to build a mathematical model from data based on the genetic programming method. In November 2009 GPdotNET became an open source project, providing the source code and an installer. Since then I have received hundreds of emails, feedback messages, questions and comments. The project was hosted on http://gpdotnet.codeplex.com. In 2016 I decided to move the project to GitHub for better collaboration and compatibility; it can be found at http://github.com/bhrnjica/gpdotnet. However, for backward compatibility, the old hosting site will stay live as long as codeplex.com is live. Since the beginning of development, my intention was that GPdotNET would be a cross-OS application which can run on Windows, Linux and Mac. Since version 2, GPdotNET can be compiled against .NET and Mono, and can run on any OS which has the Mono Framework installed. Despite this, the vast majority of users run GPdotNET on Windows.

GPdotNET is primarily used in academia, helping engineers and researchers in modelling and predicting various problems, from air pollution, water treatment and rainfall prediction to modelling of machining processes, electrical engineering, vibration and the automotive industry. GPdotNET has been used in more than ten doctoral dissertations (known to me) and master theses, and nearly a hundred papers have used GPdotNET in some kind of calculation.

Modeling with GPdotNET (New in GPdotNET v4.0)

Working with GPdotNET requires data. Through its learning algorithms, GPdotNET uses data from research or experimental measurements to learn about the problem. The results of the learning algorithms are analytical models which can describe or predict the state of the problem, or can recognize patterns. GPdotNET is very easy to use, even if you have no deep knowledge of GA, GP or ANN, and applying those methods to find solutions can be achieved very quickly. The project can be used in modelling any kind of engineering process which can be described with discrete data, as well as in education when teaching students about evolutionary methods, mainly GP and GA, as well as Artificial Neural Networks.

Working in GPdotNET follows the same procedure regardless of the problem type. That means you have the same set of steps when modelling with Genetic Programming or Neural Networks. In fact, GPdotNET presents the same set of input dialogs whether you try to solve the Traveling Salesman Problem with a Genetic Algorithm or handwriting recognition with backpropagation Neural Networks. All learning algorithms within GPdotNET share the same UI.

The picture below shows the flowchart of modelling in GPdotNET. The five steps are depicted in graphical form, surrounded by Start and Stop items.

Figure 2. Modelling layout in GPdotNET 4.0

After GPdotNET is started, the main window is shown, and the modelling process can be started.

Choosing the Solver Type

The first step is choosing the type of the solver, which depends on what you intend to do. Choosing the solver type begins when you press the “New” button and the “GPdotNET Model creation wizard” appears. Solver types are grouped in two categories. The first group (on the left side) contains models implemented prior to version 4.0. It contains solvers which apply GP to modelling regression problems, and GP to optimization of the GP models. In addition, you can perform optimization of any analytically defined function by using “Optimization of the Analytic function”. Also, there are three linear programming problems which GPdotNET can solve using GA.

On the right side, there are two kinds of solvers: GP and ANN, which are not limited to solving only regression. Both GP and ANN can build models for regression, binary or multi-class problems. Which type of problem GPdotNET will use depends on the type of the output column data (the label column).

Figure 3. Available model types

Loading Experimental Data (new in GPdotNET 4.0)

GPdotNET provides a powerful tool for importing your experimental data regardless of its type. You can import numerical, binary or classification data by using the Importing Data Wizard. With the GPdotNET importing tools you can import any kind of textual data, with any kind of separator character.

Figure 4. Importing dataset dialog

After the data is imported in the form of columns and rows, GPdotNET provides a set of very simple controls which can perform very powerful feature engineering. For each loaded column, you can set several types of metadata: column name, column type (input, output, ignore), normalization type (minmax, gauss), and missing value handling (min, max, avg). With these options, you can achieve most modelling scenarios. Before pressing “Start Modelling”, the following minimum conditions must be met:

  • At least one column must be of “input” parameter type.
  • At least one column must be of “output” parameter type.

Which type of problem (regression, binary or multi-class classification) will be used depends on the type of the output column. The following cases are considered:

  1. in case of regression problems, the output column must be of numeric type.
  2. in case of binary classification, the output column must be of binary type.
  3. in case of multi-class classification, the output column must be of categorical type.

Figure 5. Defining metadata for training data set

When a column should not be part of the feature list, it can easily be ignored by setting the Column Type to “ignore“, or the Param type to “string“.

Figure 6. Changing column type to binary

Change the value of a metadata entry by double-clicking on the current value and selecting a new value from the available popup list. When you are done with feature engineering, press the “Start Modelling” button and the modelling process can start.

Note: after you press the Start Modelling button you can still change metadata values, but after every change of the metadata values, the Start Modelling button must be pressed again.

Setting Learning Parameters

Figure 7. Setting parameters Dialog

After the data is loaded and prepared successfully, you have to set the parameters for the selected method. GPdotNET provides various parameters for each method, so you can set the parameters which generate the best output model. Every parameter is self-explanatory.

Searching for the solution

GPdotNET provides visualization of the solution search, so you can visually monitor how GPdotNET finds a better solution as the iteration number increases. Besides the search simulation, GPdotNET provides instant result representation (GP models only), so at any time the user can see what the best solution is, and how good the current best solution is against the validation or prediction data set (Result and Prediction tabs).

Figure 8. Searching simulation in GPdotNET

Saving and exporting the results

GPdotNET provides several options you can choose from while exporting your solution. You can export your solution to Excel or a text file, as well as to the Wolfram Mathematica or R programming languages (GP models only). In the case of an ANN model, the result can be exported only to Excel.


Figure 9. Searching simulation in GPdotNET

Besides parameters specific to the learning algorithm, GPdotNET provides a set of parameters which control how the iteration process terminates, as well as how it is processed by means of parallelization to use multicore processors. During the search, GPdotNET records the history, so you can see when the best solution was found, how much time has passed since the iteration process started, or how much time remains to finish the currently running iteration process.

Since GP is a method which requires a lot of processing time, GPdotNET provides parallelization, which speeds up the search process. Enabling or disabling parallel processing is just a click of a button.

GPdotNET Start Page

In case you have no data or just want to test the application, GPdotNET provides 15 data samples for demo purposes. All samples are grouped into problem-specific groups: Approximation and Regression, Binary Classification, Multi-class Classification, Time Series Modelling and Linear Programming.

Figure 10. Modelling layout in GPdotNET

By clicking on the appropriate link, a sample can be opened to see the current result and parameter values. You can easily change a parameter, press the Run button and search for another solution. This is a very handy introduction to GPdotNET. At any time, you can stop the search and export the current model or save the current state of the program.

Final note: the project is licensed under the GNU Library General Public License (LGPL). For information about the license and other copyright matters, e.g. using the application for commercial purposes, please see http://github.com/bhrnjica/gpdotnet/blob/master/license.md.

In case you need to cite it in a scientific paper or book, please refer to https://wordpress.com/post/bhrnjica.net/5995.