Linear Regression with CNTK and C#


CNTK is Microsoft’s deep learning tool for training very large and complex neural network models. However, you can use CNTK for various other purposes. In some of the previous posts we have seen how to use CNTK to perform matrix multiplication in order to calculate descriptive statistics of a data set.
In this blog post we are going to implement a simple linear regression model (LR). The model contains only one neuron and a bias parameter, so in total the linear regression has only two parameters: w and b.
The image below shows the LR model:

The reason we use CNTK to solve such a simple task is straightforward: by learning on a simple model like this one, we can see how the CNTK library works and encounter some not-so-trivial CNTK operations along the way.
The model shown above can easily be extended to a logistic regression model by adding an activation function. While linear regression represents the neural network configuration without an activation function, logistic regression is the simplest neural network configuration that includes one.

The following image shows logistic regression model:
In case you want more information about how to create logistic regression with CNTK, see this official demo example.
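For illustration only, here is a minimal sketch of that extension, assuming the createLRModel method from Step 3 below; the hypothetical createLogisticModel helper simply wraps the linear layer in a sigmoid activation:

//Sketch only (not part of this demo): logistic regression is the
//linear model from Step 3 wrapped in a sigmoid activation
private static Function createLogisticModel(Variable x, DeviceDescriptor device)
{
    //linear part: z = w*x + b
    var z = createLRModel(x, device);
    //the sigmoid squashes z into (0,1), interpretable as a class probability
    return CNTKLib.Sigmoid(z, "logisticRegression");
}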
Now that we have introduced the neural network models, we can start by defining the data set. Assume we have a simple data set representing the linear function y=2x+1. The generated data set is shown in the following table:
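x | 1 | 2 | 3 | 4 | 5
y | 3 | 5 | 7 | 9 | 11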

We already know that the linear regression parameters for the presented data set are b_0=1 and b_1=2, so we want to engage the CNTK library to recover those values, or at least parameter values very close to them.

The whole task of developing the LR model with CNTK can be described in several steps:

Step 1: Create a C# console application in Visual Studio, change the target architecture to x64, and add the latest “CNTK.GPU” NuGet package to the solution. The following image shows those actions performed in Visual Studio.

Step 2: Start writing code by adding two variables: X, the feature, and Y, the label. Once the variables are defined, define the training data set by creating a batch. The following code snippet shows how to create the variables and the batch, as well as how to start writing CNTK-based C# code.

First we need to add some using statements and define the device where the computation will happen: the CPU, or the GPU in case the machine contains an NVIDIA-compatible graphics card. The demo starts with the following code snippet:

using System;
using System.Linq;
using System.Collections.Generic;
using CNTK;
namespace LR_CNTK_Demo
{
    class Program
    {
        static void Main(string[] args)
        {
             //Step 1: Create some Demo helpers
             Console.Title = "Linear Regression with CNTK!";
             Console.WriteLine("#### Linear Regression with CNTK! ####");
             Console.WriteLine("");
            //define device
            var device = DeviceDescriptor.UseDefaultDevice();

Now define the two variables and the data set presented in the previous table:

//Step 2: define values, and variables
Variable x = Variable.InputVariable(new int[] { 1 }, DataType.Float, "input");
Variable y = Variable.InputVariable(new int[] { 1 }, DataType.Float, "output");

//Step 2: define training data set from table above
var xValues = Value.CreateBatch(new NDShape(1, 1), new float[] { 1f, 2f, 3f, 4f, 5f }, device);
var yValues = Value.CreateBatch(new NDShape(1, 1), new float[] { 3f, 5f, 7f, 9f, 11f }, device);

Step 3: Create the linear regression network model by passing the input variable and the computation device. As already discussed, the model consists of one neuron and one bias parameter. The following method implements the LR network model:

private static Function createLRModel(Variable x, DeviceDescriptor device)
{
    //initializer for parameters
    var initV = CNTKLib.GlorotUniformInitializer(1.0, 1, 0, 1);

    //bias
    var b = new Parameter(new NDShape(1, 1), DataType.Float, initV, device, "b");

    //weights
    var W = new Parameter(new NDShape(2, 1), DataType.Float, initV, device, "w");

    //matrix product
    var Wx = CNTKLib.Times(W, x, "wx");

    //layer
    var l = CNTKLib.Plus(b, Wx, "wx_b");

    return l;
}

First we create an initializer, which sets the startup values of the network parameters. Then we define the bias and weight parameters, join them into the linear model “wx+b”, and return the result as a Function type. The createLRModel method is called in the main method. Once the model is created, we can examine it and verify that there are only two parameters in the model. The following code creates the linear regression model and prints the model parameters:

//Step 3: create linear regression model
var lr = createLRModel(x, device);
//Network model contains only two parameters b and w, so we query
//the model in order to get parameter values
var paramValues = lr.Inputs.Where(z => z.IsParameter).ToList();
var totalParameters = paramValues.Sum(c => c.Shape.TotalSize);
Console.WriteLine($"LRM has {totalParameters} params, {paramValues[0].Name} and {paramValues[1].Name}.");

In the previous code we have seen how to extract the parameters from the model. Once we have the parameters, we can change their values, or just print them for further analysis.

Step 4: Create the Trainer, which will be used to train the network parameters w and b. The following code snippet shows the implementation of the trainer method.

private static Trainer createTrainer(Function network, Variable target)
{
    //learning rate
    var lrate = 0.082;
    var lr = new TrainingParameterScheduleDouble(lrate);
    //network parameters
    var zParams = new ParameterVector(network.Parameters().ToList());

    //create loss and eval
    Function loss = CNTKLib.SquaredError(network, target);
    Function eval = StatMetrics.RMSError(network, target);

    //learners
    //
    var llr = new List<Learner>();
    var msgd = Learner.SGDLearner(network.Parameters(), lr);
    llr.Add(msgd);

    //trainer
    var trainer = Trainer.CreateTrainer(network, loss, eval, llr);
    //
    return trainer;
}
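Note that StatMetrics.RMSError is a helper from the code developed in the previous posts, not a function of the CNTK library itself. A minimal sketch of such a helper, assuming the RMS error is simply the square root of CNTK’s squared error, might look like this:

//A possible sketch of the assumed RMSError helper (StatMetrics comes from
//earlier posts and is not part of the CNTK API)
private static Function RMSError(Function network, Variable target)
{
    var squaredErr = CNTKLib.SquaredError(network, target);
    return CNTKLib.Sqrt(squaredErr, "rmsError");
}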

First we define the learning rate, the main neural network training parameter. Then we create the loss and evaluation functions. With those we can create the SGD learner. Once the SGD learner object is instantiated, the trainer is created by calling the static CNTK method CreateTrainer, and returned from the method. The createTrainer method is called in the main method:

//Step 4: create trainer
var trainer = createTrainer(lr, y);

Step 5: Start the training process. Once the variables, data set, network model and trainer are defined, the training process can be started.

//Step 5: training
for (int i = 1; i <= 200; i++)
{
    var d = new Dictionary<Variable, Value>();
    d.Add(x, xValues);
    d.Add(y, yValues);
    //
    trainer.TrainMinibatch(d, true, device);
    //
    var loss = trainer.PreviousMinibatchLossAverage();
    var eval = trainer.PreviousMinibatchEvaluationAverage();
    //
    if (i % 20 == 0)
        Console.WriteLine($"It={i}, Loss={loss}, Eval={eval}");

    if (i == 200)
    {
        //print the learned weights
        var b0_name = paramValues[0].Name;
        var b0 = new Value(paramValues[0].GetValue()).GetDenseData<float>(paramValues[0]);
        var b1_name = paramValues[1].Name;
        var b1 = new Value(paramValues[1].GetValue()).GetDenseData<float>(paramValues[1]);
        Console.WriteLine($" ");
        Console.WriteLine($"Training process finished with the following regression parameters:");
        Console.WriteLine($"b={b0[0][0]}, w={b1[0][0]}");
        Console.WriteLine($" ");
    }
}
}

As can be seen, in just 200 iterations the regression parameters reached almost the expected values: b=0.995 and w=2.005. Since the training process differs from classic regression parameter estimation, we cannot expect exact values. To estimate the regression parameters, the neural network uses an iterative method called Stochastic Gradient Descent (SGD). Classic regression, on the other hand, minimizes the least squares error and solves a system of equations where the unknowns are b and w.
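To verify, the closed-form least squares solution can be computed directly from the five training points; a minimal sketch in plain C# (not part of the CNTK demo):

//closed-form least squares: w = cov(x,y)/var(x), b = mean(y) - w*mean(x)
var xs = new double[] { 1, 2, 3, 4, 5 };
var ys = new double[] { 3, 5, 7, 9, 11 };
double mx = xs.Average(), my = ys.Average();
double cov = xs.Zip(ys, (xi, yi) => (xi - mx) * (yi - my)).Sum();
double varx = xs.Sum(xi => (xi - mx) * (xi - mx));
double w = cov / varx;   //exactly 2
double b = my - w * mx;  //exactly 1
Console.WriteLine($"closed-form parameters: b={b}, w={w}");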
Once we implement all the code above, we can start the LR demo by pressing F5. A similar output window should be shown:

I hope this blog post provides enough information to get started with CNTK, C# and machine learning. The source code for this blog post can be downloaded here.


Input normalization as separate layer in CNTK with C#


In the previous post we have seen how to calculate some basic descriptive statistics parameters, as well as how to normalize data by calculating the mean and standard deviation. In this blog post we are going to implement data normalization as a regular neural network layer, which can simplify the training process and the data preparation.

What is Data normalization?

Simply put, data normalization is a set of transformations that map the values of any feature in a data set into a predefined numeric range, usually [-1,1], [0,1] or some other specific range. Data normalization plays a very important role in ML, since it can dramatically improve the training process and simplify the setting of network parameters.

There are two main types of data normalization:
– MinMax normalization, which transforms all values into the range [0,1], and
– Gauss (Z-score) normalization, which transforms the values so that the average is zero and the standard deviation is 1.

Besides those two there are plenty of other methods. Usually those two are used when the size of the data set is known; otherwise we should use one of the other methods, like log scaling or dividing every value by a constant. But why does data need to be normalized? This is an essential question in ML, and the simplest answer is: to give all features equal influence on the output label (the two transformations are sketched below). More about data normalization and scaling can be found at this link.
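To make the two transformations concrete, here is a minimal sketch of both in plain C# (independent of CNTK, using System.Linq):

//MinMax: maps each value into [0,1]
static double[] MinMaxNormalize(double[] v)
{
    double min = v.Min(), max = v.Max();
    return v.Select(x => (x - min) / (max - min)).ToArray();
}

//Z-score: shifts to zero mean and scales to unit standard deviation
static double[] ZScoreNormalize(double[] v)
{
    double mean = v.Average();
    double std = Math.Sqrt(v.Sum(x => (x - mean) * (x - mean)) / v.Length);
    return v.Select(x => (x - mean) / std).ToArray();
}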

In this blog post we are going to implement a CNTK neural network which contains a “normalization layer” between the input and the first hidden layer. The schematic picture of the network is shown in the following image:

As can be observed, the normalization layer is placed between the input and the first hidden layer. The normalization layer contains the same number of neurons as the input layer and produces output of the same dimension as the input layer.

In order to implement the normalization layer, the following requirements must be met:

  • calculate the average \mu and standard deviation \sigma of the training data set, as well as the maximum and minimum value of each feature;
  • do this prior to creating the neural network model, since those values are needed in the normalization layer;
  • within the network model creation, define the normalization layer right after the input layer.

Calculation of mean and standard deviation for training data set

Before network creation, we should prepare the mean and standard deviation parameters, which will be used in the normalization layer as constants. Fortunately, CNTK has a static method in the MinibatchSource class for this purpose: MinibatchSource.ComputeInputPerDimMeansAndInvStdDevs. The method takes the whole training data set defined in the minibatch source and calculates the parameters.


//calculate mean and std for the minibatch source
//prepare the training data
var d = new Dictionary<StreamInformation, Tuple<NDArrayView, NDArrayView>>();
using (var mbs = MinibatchSource.TextFormatMinibatchSource(
    trainingDataPath, streamConfig, MinibatchSource.FullDataSweep, false))
{
    d.Add(mbs.StreamInfo("feature"), new Tuple<NDArrayView, NDArrayView>(null, null));
    //compute mean and inverse standard deviation of the input variables
    MinibatchSource.ComputeInputPerDimMeansAndInvStdDevs(mbs, d, device);
}

Now that we have the average and std values for each feature, we can create a network with a normalization layer. In this example we define a simple feed-forward NN with one input, one normalization, one hidden and one output layer.


private static Function createFFModelWithNormalizationLayer(Variable feature, int hiddenDim, int outputDim, Tuple<NDArrayView, NDArrayView> avgStdConstants, DeviceDescriptor device)
{
    //First the parameter initialization must be performed
    var glorotInit = CNTKLib.GlorotUniformInitializer(
        CNTKLib.DefaultParamInitScale,
        CNTKLib.SentinelValueForInferParamInitRank,
        CNTKLib.SentinelValueForInferParamInitRank, 1);

    //*******Input layer is indicated as feature
    var inputLayer = feature;

    //*******Normalization layer
    //Item1 holds the per-dimension means, Item2 the inverse standard
    //deviations, as returned by ComputeInputPerDimMeansAndInvStdDevs
    var mean = new Constant(avgStdConstants.Item1, "mean");
    var std = new Constant(avgStdConstants.Item2, "std");
    var normalizedLayer = CNTKLib.PerDimMeanVarianceNormalize(inputLayer, mean, std);

    //*****hidden layer creation
    //the weight matrix shape is hiddenDim x inputDim (Iris has 4 features)
    var shape = new int[] { hiddenDim, 4 };
    var weightParam = new Parameter(shape, DataType.Float, glorotInit, device, "wh");
    var biasParam = new Parameter(new NDShape(1, hiddenDim), 0, device, "bh");
    var hidLay = CNTKLib.Times(weightParam, normalizedLayer) + biasParam;
    var hidLayerAct = CNTKLib.ReLU(hidLay);

    //******Output layer creation
    //the last action is the creation of the output layer
    var shapeOut = new int[] { outputDim, hiddenDim };
    var wParamOut = new Parameter(shapeOut, DataType.Float, glorotInit, device, "wo");
    var bParamOut = new Parameter(new NDShape(1, outputDim), 0, device, "bo");
    var outLay = CNTKLib.Times(wParamOut, hidLayerAct) + bParamOut;
    return outLay;
}

Complete Source Code Example

The whole source code for this example is listed below. The example shows how to normalize the input features of the famous Iris data set. Notice that with this way of data normalization we don’t need to handle normalization of the validation or testing data sets separately, because the normalization is part of the network model.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using CNTK;
namespace NormalizationLayerDemo
{
    class Program
    {
        static string trainingDataPath = "./data/iris_training.txt";
        static string validationDataPath = "./data/iris_validation.txt";
        static void Main(string[] args)
        {
            DeviceDescriptor device = DeviceDescriptor.UseDefaultDevice();

            //stream configuration to distinct features and labels in the file
            var streamConfig = new StreamConfiguration[]
               {
                   new StreamConfiguration("feature", 4),
                   new StreamConfiguration("flower", 3)
               };

            // build a NN model
            //define input and output variable and connecting to the stream configuration
            var feature = Variable.InputVariable(new NDShape(1, 4), DataType.Float, "feature");
            var label = Variable.InputVariable(new NDShape(1, 3), DataType.Float, "flower");

            //calculate mean and std for the minibatchsource
            // prepare the training data
            var d = new Dictionary<StreamInformation, Tuple<NDArrayView, NDArrayView>>();
            using (var mbs = MinibatchSource.TextFormatMinibatchSource(
               trainingDataPath , streamConfig, MinibatchSource.FullDataSweep,false))
            {
                d.Add(mbs.StreamInfo("feature"), new Tuple<NDArrayView, NDArrayView>(null, null));
                //compute mean and standard deviation of the population for inputs variables
                MinibatchSource.ComputeInputPerDimMeansAndInvStdDevs(mbs, d, device);

            }

            //Build simple Feed Froward Neural Network with normalization layer
            var ffnn_model = createFFModelWithNormalizationLayer(feature,5,3,d.ElementAt(0).Value, device);

            //Loss and error functions definition
            var trainingLoss = CNTKLib.CrossEntropyWithSoftmax(new Variable(ffnn_model), label, "lossFunction");
            var classError = CNTKLib.ClassificationError(new Variable(ffnn_model), label, "classificationError");

            // set learning rate for the network
            var learningRatePerSample = new TrainingParameterScheduleDouble(0.01, 1);

            //define learners for the NN model
            var ll = Learner.SGDLearner(ffnn_model.Parameters(), learningRatePerSample);

            //define trainer based on model, loss and error functions , and SGD learner
            var trainer = Trainer.CreateTrainer(ffnn_model, trainingLoss, classError, new Learner[] { ll });

            //Preparation for the iterative learning process

            // create minibatch for training
            var mbsTraining = MinibatchSource.TextFormatMinibatchSource(trainingDataPath, streamConfig, MinibatchSource.InfinitelyRepeat, true);

            int epoch = 1;
            while (epoch < 20)
            {
                //take the next minibatch from the training source; the
                //minibatch size (65) and epoch limit (20) are assumed values
                var minibatchData = mbsTraining.GetNextMinibatch(65, device);
                var arguments = new Dictionary<Variable, MinibatchData>
                {
                    { feature, minibatchData[mbsTraining.StreamInfo("feature")] },
                    { label, minibatchData[mbsTraining.StreamInfo("flower")] }
                };
                trainer.TrainMinibatch(arguments, device);

                //report progress once a full sweep over the training data ends
                if (minibatchData.Values.Any(a => a.sweepEnd))
                {
                    reportTrainingProgress(feature, label, streamConfig, trainer, epoch, device);
                    epoch++;
                }
            }
            Console.Read();
        }

        private static void reportTrainingProgress(Variable feature, Variable label, StreamConfiguration[] streamConfig,  Trainer trainer, int epoch, DeviceDescriptor device)
        {
            // create minibatch for training
            var mbsTrain = MinibatchSource.TextFormatMinibatchSource(trainingDataPath, streamConfig, MinibatchSource.FullDataSweep, false);
            var trainD = mbsTrain.GetNextMinibatch(int.MaxValue, device);
            //
            var a1 = new UnorderedMapVariableMinibatchData();
            a1.Add(feature, trainD[mbsTrain.StreamInfo("feature")]);
            a1.Add(label, trainD[mbsTrain.StreamInfo("flower")]);
            var trainEvaluation = trainer.TestMinibatch(a1);

            // create minibatch for validation
            var mbsVal = MinibatchSource.TextFormatMinibatchSource(validationDataPath, streamConfig, MinibatchSource.FullDataSweep, false);
            var valD = mbsVal.GetNextMinibatch(int.MaxValue, device);

            //
            var a2 = new UnorderedMapVariableMinibatchData();
            a2.Add(feature, valD[mbsVal.StreamInfo("feature")]);
            a2.Add(label, valD[mbsVal.StreamInfo("flower")]);
            var valEvaluation = trainer.TestMinibatch(a2);

            Console.WriteLine($"Epoch={epoch}, Train Error={trainEvaluation}, Validation Error={valEvaluation}");
        }

        private static Function createFFModelWithNormalizationLayer(Variable feature, int hiddenDim, int outputDim, Tuple<NDArrayView, NDArrayView> avgStdConstants, DeviceDescriptor device)
        {
            //First the parameter initialization must be performed
            var glorotInit = CNTKLib.GlorotUniformInitializer(
                    CNTKLib.DefaultParamInitScale,
                    CNTKLib.SentinelValueForInferParamInitRank,
                    CNTKLib.SentinelValueForInferParamInitRank, 1);

            //*******Input layer is indicated as feature
            var inputLayer = feature;

            //*******Normalization layer
            //Item1 holds the per-dimension means, Item2 the inverse standard
            //deviations, as returned by ComputeInputPerDimMeansAndInvStdDevs
            var mean = new Constant(avgStdConstants.Item1, "mean");
            var std = new Constant(avgStdConstants.Item2, "std");
            var normalizedLayer = CNTKLib.PerDimMeanVarianceNormalize(inputLayer, mean, std);

            //*****hidden layer creation
            //the weight matrix shape is hiddenDim x inputDim (Iris has 4 features)
            var shape = new int[] { hiddenDim, 4 };
            var weightParam = new Parameter(shape, DataType.Float, glorotInit, device, "wh");
            var biasParam = new Parameter(new NDShape(1, hiddenDim), 0, device, "bh");
            var hidLay = CNTKLib.Times(weightParam, normalizedLayer) + biasParam;
            var hidLayerAct = CNTKLib.ReLU(hidLay);

            //******Output layer creation
            //the last action is the creation of the output layer
            var shapeOut = new int[] { outputDim, hiddenDim };
            var wParamOut = new Parameter(shapeOut, DataType.Float, glorotInit, device, "wo");
            var bParamOut = new Parameter(new NDShape(1, outputDim), 0, device, "bo");
            var outLay = CNTKLib.Times(wParamOut, hidLayerAct) + bParamOut;
            return outLay;
        }
    }
}

The output window should look like this:


The data set files used in the example can be downloaded from here, and the full source code demo from here.

Announcing GPdotNET v5 and related Book


https://www.igi-global.com/Images/Covers/9781522560050.png

After one year of writing and coding, I can finally announce two big achievements, which are related to each other:

1. The fifth version of my open source project GPdotNET – genetic programming tool, and

2. The book: Optimized Genetic Programming Applications: Emerging Research and Opportunities, published by IGI Global.

Alongside the book, I was developing the GPdotNET application, which is explained in Chapter 5. Chapter 5 describes in depth all aspects of the application, with real-world examples.

As can be seen, GPdotNET v5 is a completely rewritten application, with a new logo and GUI. As an introduction to the application, I have prepared several YouTube videos with quick explanations of how to use some of the main modules in GPdotNET.

Using ANNdotNET – GUI tool to create CNTK based model for Iris data set


In this tutorial we are going to create and train an Iris model using ANNdotNET. ANNdotNET is a Windows application for creating and training CNTK-based models without leaving the GUI.

The whole procedure, from downloading the data set to exporting the model, can be achieved in seven steps.

Step 1: Download the data set file from https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data.

Step 2: Open the ANNdotNET application. Press the New command, select the Project 1 tree item and rename the project to Iris Data Set.

Step 3: Select the Data command from the Model Preparation ribbon group, click the File button in the Import experimental data dialog and select the previously downloaded file. Check the Comma check box and press the Import Data button.

Step 4: Double-click Scaling for each column, and select the MinMax normalization option from the popup combo box list. Double-click Type for the output column, and select Category, with 1:N encoding. More information on how to prepare data for ML can be found at https://bhrnjica.net/2018/03/01/data-preparation-tool-for-machine-learning/

Step 5: Once the data is prepared, click the Create Model command and the Model Settings panel is shown. Set up the parameters as shown in the image below and click the Run command.

Step 6: Once the model is trained, you can evaluate it by selecting the Evaluate command. Depending on the model type (regression, binary or multi-class classification), the appropriate evaluation dialog appears. Since this is a multi-class classification model, the confusion matrix is shown, with micro and macro performance parameters.

Step 7: For further analysis you can export the model to Excel or to ONNX. You can also save the project, which can later be opened and retrained again.

Note: ANNdotNET is currently an alpha version, and more features will come in the near future.

Announcement of GPdotNET v5 and ANNdotNET v1.0


As you already know, the GPdotNET v4 tool consists of several modules:

  • GP module for creating and training models based on genetic programming,
  • ANN module for creating and training models based on Feed Forward Neural Networks,
  • GA module for model and function optimization using Genetic Algorithm
  • LGA module for linear programming with GA, which includes solving Traveling Salesman, Assignment and Transportation problems.

With the latest release, GPdotNET has changed a lot. The initial idea behind GPdotNET was to provide the GP method in the application, and as the project grew, a lot of new implementations were included in the main project. This year I decided to split it into two different projects, which can be seen as the natural evolution of GPdotNET v4.

The first project follows the previous version and keeps the name GPdotNET v5. It includes only the GP-related algorithm implementation, developed for creating and training supervised ML models (regression, binary and multi-class classification).

The second project uses several ANN algorithms for creating and training supervised machine learning models. The project is called ANNdotNET. It is a Windows Forms desktop application, very similar to GPdotNET, for creating and training ANN models.

I am very proud to announce that the new version of GPdotNET will be released as two different open source projects.


  1. GPdotNET v5 – hosted at the same address as before. The older version, GPdotNET v4, has moved to http://github.com/bhrnjica/gpdotnetv4 and will be the last version containing the non-GP and ANN modules of GPdotNET.
  2. ANNdotNET v1 – hosted in a separate repository: http://github.com/bhrnjica/anndotnet.