Descriptive statistics and data normalization with CNTK and C#


As you probably know, CNTK is the Microsoft Cognitive Toolkit for deep learning. It is an open-source library used by various Microsoft products, and it is also a powerful library for developing custom ML solutions in various fields, across different platforms and languages. What also makes CNTK so powerful is the way it is implemented. The library is implemented as a series of computation graphs, where each graph is fully elaborated into the sequence of steps performed when training a deep neural network.

Each CNTK compute graph is created from a set of nodes, where each node represents a numerical (mathematical) operation. The edges between the nodes in the graph represent the data flow between operations. Such a representation allows CNTK to schedule the computation on the underlying hardware, GPU or CPU. CNTK can dynamically analyze the graphs in order to optimize both latency and efficient use of resources. The most powerful part of this is the fact that CNTK can calculate the derivative of any constructed set of operations, which can be used for efficient learning of the network parameters. The following image shows the core architecture of CNTK.
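
As a small illustration of the idea, the following minimal sketch (shapes and values are illustrative only, and the CNTK and System.Collections.Generic namespaces are assumed) builds a tiny graph with a single Plus node and evaluates it:

var device = DeviceDescriptor.UseDefaultDevice();
//two input nodes of the graph
var x = Variable.InputVariable(new int[] { 1 }, DataType.Float, "x");
var y = Variable.InputVariable(new int[] { 1 }, DataType.Float, "y");
//a Plus operation node; the edges carry the data between the nodes
Function sum = CNTKLib.Plus(x, y);

//bind the data to the input nodes and evaluate the graph
var inputMap = new Dictionary<Variable, Value>()
{
    { x, Value.CreateBatch<float>(new int[] { 1 }, new float[] { 2f }, device) },
    { y, Value.CreateBatch<float>(new int[] { 1 }, new float[] { 3f }, device) },
};
var outputMap = new Dictionary<Variable, Value>() { { sum.Output, null } };
sum.Evaluate(inputMap, outputMap, device);//scheduled on CPU or GPU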

On the other hand, any operation can be executed on the CPU or the GPU with minimal code changes. In fact, we can implement a method which automatically picks GPU computation if a GPU is available. CNTK is the first .NET library which enables .NET developers to build GPU-aware .NET applications.

What this exactly means is that with this powerful library you can run complex math computations directly on the GPU in .NET using C#, which is currently not possible with the standard .NET libraries.
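
For example, a small helper along these lines (a sketch, not part of the official samples) can pick a GPU when one is present and fall back to the CPU otherwise:

static DeviceDescriptor GetDevice()
{
    //enumerate all devices CNTK can see and prefer the first GPU
    foreach (var device in DeviceDescriptor.AllDevices())
    {
        if (device.Type == DeviceKind.GPU)
            return device;
    }
    //no GPU found, fall back to the CPU
    return DeviceDescriptor.CPUDevice;
}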

In this blog post I will show how to calculate some basic statistical operations on a data set.

Say we have a data set with 4 columns (features) and 20 rows (samples). The C# implementation of this 2D array is shown in the following code snippet:

static float[][] mData = new float[][] {
new float[] { 5.1f, 3.5f, 1.4f, 0.2f},
new float[] { 4.9f, 3.0f, 1.4f, 0.2f},
new float[] { 4.7f, 3.2f, 1.3f, 0.2f},
new float[] { 4.6f, 3.1f, 1.5f, 0.2f},
new float[] { 6.9f, 3.1f, 4.9f, 1.5f},
new float[] { 5.5f, 2.3f, 4.0f, 1.3f},
new float[] { 6.5f, 2.8f, 4.6f, 1.5f},
new float[] { 5.0f, 3.4f, 1.5f, 0.2f},
new float[] { 4.4f, 2.9f, 1.4f, 0.2f},
new float[] { 4.9f, 3.1f, 1.5f, 0.1f},
new float[] { 5.4f, 3.7f, 1.5f, 0.2f},
new float[] { 4.8f, 3.4f, 1.6f, 0.2f},
new float[] { 4.8f, 3.0f, 1.4f, 0.1f},
new float[] { 4.3f, 3.0f, 1.1f, 0.1f},
new float[] { 6.5f, 3.0f, 5.8f, 2.2f},
new float[] { 7.6f, 3.0f, 6.6f, 2.1f},
new float[] { 4.9f, 2.5f, 4.5f, 1.7f},
new float[] { 7.3f, 2.9f, 6.3f, 1.8f},
new float[] { 5.7f, 3.8f, 1.7f, 0.3f},
new float[] { 5.1f, 3.8f, 1.5f, 0.3f},};

If you want to play with CNTK and math calculations you need some knowledge of calculus, as well as vectors, matrices and tensors. Also, in CNTK any operation is performed as a matrix operation, which may simplify the calculation process for you. In the standard approach, you have to deal with multidimensional arrays during the calculations. To my knowledge, there is currently no other .NET library which can perform math operations on the GPU, which constrains the .NET platform for the implementation of high performance applications.

If we want to compute the average value and the standard deviation for each column, we can do that with CNTK in a very easy way. Once we compute those values we can use them for normalizing the data set by computing the standard score (Gauss standardization).

The Gauss standardization is calculated by the following formula:

nValue = \frac{X-\mu}{\sigma},
where X is the column value, \mu is the column mean, and \sigma is the standard deviation of the column.
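
Before looking at the CNTK version, here is a plain C# sketch of the same formula for a single column (a minimal sketch assuming System and System.Linq; it uses the sample standard deviation, dividing by n - 1, just like the CNTK code below):

static float[] Standardize(float[] column)
{
    float mean = column.Average();
    //sample standard deviation: divide the sum of squared deviations by n - 1
    float std = (float)Math.Sqrt(
        column.Sum(v => (v - mean) * (v - mean)) / (column.Length - 1));
    //standard score for every value in the column
    return column.Select(v => (v - mean) / std).ToArray();
}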

For this example we are going to perform three statistical operations, and CNTK automatically provides us with the ability to compute those values on the GPU. This is very important in case you have a data set with millions of rows, since the computation can then be performed in a few milliseconds.

Any computation process in CNTK can be achieved in several steps:

1. Read the data from an external source or from in-memory data,
2. Define the Value and Variable objects,
3. Define the Function for the calculation,
4. Perform the evaluation of the function by passing the Variable and Value objects,
5. Retrieve the result of the calculation and show it.

All the above steps are shown in the following implementation:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using CNTK;
namespace DataNormalizationWithCNTK
{
    class Program
    {
       static float[][] mData = new float[][] {
        new float[] { 5.1f, 3.5f, 1.4f, 0.2f},
        new float[] { 4.9f, 3.0f, 1.4f, 0.2f},
        new float[] { 4.7f, 3.2f, 1.3f, 0.2f},
        new float[] { 4.6f, 3.1f, 1.5f, 0.2f},
        new float[] { 6.9f, 3.1f, 4.9f, 1.5f},
        new float[] { 5.5f, 2.3f, 4.0f, 1.3f},
        new float[] { 6.5f, 2.8f, 4.6f, 1.5f},
        new float[] { 5.0f, 3.4f, 1.5f, 0.2f},
        new float[] { 4.4f, 2.9f, 1.4f, 0.2f},
        new float[] { 4.9f, 3.1f, 1.5f, 0.1f},
        new float[] { 5.4f, 3.7f, 1.5f, 0.2f},
        new float[] { 4.8f, 3.4f, 1.6f, 0.2f},
        new float[] { 4.8f, 3.0f, 1.4f, 0.1f},
        new float[] { 4.3f, 3.0f, 1.1f, 0.1f},
        new float[] { 6.5f, 3.0f, 5.8f, 2.2f},
        new float[] { 7.6f, 3.0f, 6.6f, 2.1f},
        new float[] { 4.9f, 2.5f, 4.5f, 1.7f},
        new float[] { 7.3f, 2.9f, 6.3f, 1.8f},
        new float[] { 5.7f, 3.8f, 1.7f, 0.3f},
        new float[] { 5.1f, 3.8f, 1.5f, 0.3f},};
        static void Main(string[] args)
        {
            //define device where the calculation will executes
            var device = DeviceDescriptor.UseDefaultDevice();

            //print data to console
            Console.WriteLine($"X1,\tX2,\tX3,\tX4");
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");
            foreach (var row in mData)
            {
                Console.WriteLine($"{row[0]},\t{row[1]},\t{row[2]},\t{row[3]}");
            }
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");


            //convert data into enumerable list
            var data = mData.ToEnumerable<IEnumerable<float>>();

            
            //assign the values 
            var vData = Value.CreateBatchOfSequences<float>(new int[] {4},data, device);
            //create variable to describe the data
            var features = Variable.InputVariable(vData.Shape, DataType.Float);

            //define mean function for the variable
            var mean =  CNTKLib.ReduceMean(features, new Axis(2));//Axis(2) means calculate the mean along the third axis, which represents the 4 features
            
            //map variables and data
            var inputDataMap = new Dictionary<Variable, Value>() { { features, vData } };
            var meanDataMap = new Dictionary<Variable, Value>() { { mean, null } };

            //mean calculation
            mean.Evaluate(inputDataMap,meanDataMap,device);
            //get result
            var meanValues = meanDataMap[mean].GetDenseData<float>(mean);

            Console.WriteLine($"");
            Console.WriteLine($"Average values for each features x1={meanValues[0][0]},x2={meanValues[0][1]},x3={meanValues[0][2]},x4={meanValues[0][3]}");

            //Calculation of standard deviation
            var std = calculateStd(features);
            var stdDataMap = new Dictionary<Variable, Value>() { { std, null } };
            //std calculation
            std.Evaluate(inputDataMap, stdDataMap, device);
            //get result
            var stdValues = stdDataMap[std].GetDenseData<float>(std);
            
            Console.WriteLine($"");
            Console.WriteLine($"STD of features x1={stdValues[0][0]},x2={stdValues[0][1]},x3={stdValues[0][2]},x4={stdValues[0][3]}");

            //Once we have mean and std we can calculate Standardized values for the data
            var gaussNormalization = CNTKLib.ElementDivide(CNTKLib.Minus(features, mean), std);
            var gaussDataMap = new Dictionary<Variable, Value>() { { gaussNormalization, null } };
            //normalization calculation
            gaussNormalization.Evaluate(inputDataMap, gaussDataMap, device);

            //get result
            var normValues = gaussDataMap[gaussNormalization].GetDenseData<float>(gaussNormalization);
            //print data to console
            Console.WriteLine($"-------------------------------------------");
            Console.WriteLine($"Normalized values for the above data set");
            Console.WriteLine($"");
            Console.WriteLine($"X1,\tX2,\tX3,\tX4");
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");
            var row2 = normValues[0];
            for (int j = 0; j < 80; j += 4)
            {
                Console.WriteLine($"{row2[j]},\t{row2[j + 1]},\t{row2[j + 2]},\t{row2[j + 3]}");
            }
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");
        }

        private static Function calculateStd(Variable features)
        {
            var mean = CNTKLib.ReduceMean(features,new Axis(2));
            var remainder = CNTKLib.Minus(features, mean);
            var squared = CNTKLib.Square(remainder);
            //the last dimension indicates the number of samples
            var n = new Constant(new NDShape(0), DataType.Float, features.Shape.Dimensions.Last()-1);
            var elm = CNTKLib.ElementDivide(squared, n);
            var sum = CNTKLib.ReduceSum(elm, new Axis(2));
            var stdVal = CNTKLib.Sqrt(sum);
            return stdVal;
        }
    }

    public static class ArrayExtensions
    {
        public static IEnumerable<T> ToEnumerable<T>(this Array target)
        {
            foreach (var item in target)
                yield return (T)item;
        }
    }
}

The output for the source code above should look like:

CNTK 106 Tutorial – Time Series prediction with LSTM using C#


In this post I will show how to implement the CNTK 106 tutorial in C#. This tutorial is written in Python and there is no related example in C#, so I decided to translate this very good tutorial into C#. The tutorial can be found at: CNTK 106: Part A – Time series prediction with LSTM (Basics), and it uses a sine wave function in order to predict time series data. For this problem a Long Short-Term Memory (LSTM) recurrent neural network is used.

Goal

The goal of this tutorial is the prediction of simulated data of a continuous function (a sine wave). From N previous values of the y=sin(t) function, where y is the observed amplitude of the signal at time t, we are going to predict M values of y for the corresponding future time points.

The exciting part of this tutorial is the use of an LSTM recurrent neural network, which is nicely suited for this kind of problem. As you probably know, an LSTM is a special recurrent neural network which has the ability to learn from its experience during training. More information about this fantastic variant of recurrent neural networks can be found here.

The blog post is divided into several sub-sections:

  1. Simulated data part
  2. LSTM Network
  3. Model training and evaluation

Since the simulated data set is huge, the original tutorial has two running modes, selected by the variable isFast. In fast mode the variable is set to True, and this is the mode used in this tutorial. Later, the reader may change the value to False in order to get a much better trained model, but the training time will be much longer. The demo for this blog post exposes the batch size and the iteration number as variables, so the user may define those numbers as he/she wants.

Data generation

In order to generate the simulated sine wave data, we are going to implement several helper methods. Let N and M be the ordered sets of past values and future (desired predicted) values of the sine wave, respectively. Two methods are implemented:

  1. loadWaveDataset()
  2. splitData()

The loadWaveDataset method takes the periodic function and the set of independent values (which here corresponds to time) and generates the wave function, given the time steps and the time shift. The method corresponds to the generate_data() Python method from the original tutorial.

static Dictionary<string, (float[][] train, float[][] valid, float[][] test)> loadWaveDataset(Func<double, double> fun, float[] x0, int timeSteps, int timeShift)
{
    ////fill data
    float[] xsin = new float[x0.Length];//all data
    for (int l = 0; l < x0.Length; l++)
        xsin[l] = (float)fun(x0[l]);


    //split data on training and testing part
    var a = new float[xsin.Length - timeShift];
    var b = new float[xsin.Length - timeShift];

    for (int l = 0; l < xsin.Length; l++)
    {
        //fill the features array a
        if (l < xsin.Length - timeShift)
            a[l] = xsin[l];
        //fill the labels array b, shifted by timeShift
        if (l >= timeShift)
            b[l - timeShift] = xsin[l];
    }

    //make arrays of data
    var a1 = new List<float[]>();
    var b1 = new List<float[]>();
    for (int i = 0; i < a.Length - timeSteps + 1; i++)
    {
        //features
        var row = new float[timeSteps];
        for (int j = 0; j < timeSteps; j++)
            row[j] = a[i + j];
        //create features row
        a1.Add(row);
        //label row
        b1.Add(new float[] { b[i + timeSteps - 1] });
    }

    //split data into train, validation and test data set
    var xxx = splitData(a1.ToArray(), 0.1f, 0.1f);
    var yyy = splitData(b1.ToArray(), 0.1f, 0.1f);


    var retVal = new Dictionary<string, (float[][] train, float[][] valid, float[][] test)>();
    retVal.Add("features", xxx);
    retVal.Add("label", yyy);
    return retVal;
}
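
A hypothetical call of the method above might look like this (the sampling step and the timeSteps and timeShift values are assumptions for illustration; System.Linq is assumed):

//1000 time points sampled along the sine wave
float[] x0 = Enumerable.Range(0, 1000)
                       .Select(i => i * (float)(Math.PI / 50.0))
                       .ToArray();
//N = 5 past values are used to predict the value 1 step ahead
var dataSet = loadWaveDataset(Math.Sin, x0, timeSteps: 5, timeShift: 1);
float[][] trainFeatures = dataSet["features"].train;
float[][] trainLabels = dataSet["label"].train;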

Once the data is generated, three data sets should be created: train, validation and test, which are obtained by splitting the data set generated by the above method. The following splitData method splits the original sine wave data set into the three data sets:

static (float[][] train, float[][] valid, float[][] test) splitData(float[][] data, float valSize = 0.1f, float testSize = 0.1f)
{
    //calculate
    var posTest = (int)(data.Length * (1 - testSize));
    var posVal = (int)(posTest * (1 - valSize));

    return (data.Skip(0).Take(posVal).ToArray(), data.Skip(posVal).Take(posTest - posVal).ToArray(), data.Skip(posTest).ToArray());
}

In order to visualize the data, a Windows Forms project is created. Moreover, the ZedGraph .NET class library is used in order to visualize the data. The following picture shows the generated data.

Network modeling

As mentioned at the beginning of the blog post, we are going to create an LSTM recurrent neural network with one LSTM cell for each input. We have N inputs and each input is a value of our continuous function. The N outputs from the LSTM are the input into a dense layer that produces a single output. Between the LSTM and the dense layer we insert a dropout layer that randomly drops 20% of the values coming from the LSTM, to prevent overfitting the model to the training data set. We want to use the dropout layer during training, but when using the model to make predictions we don't want to drop values.

The description above is illustrated in the following picture:

The implementation of the LSTM can be summarized in one method, while the full implementation can be viewed in the demo sample attached to this blog post.
The following method implements the LSTM network depicted in the image above. The arguments for the method are already defined.

public static Function CreateModel(Variable input, int outDim, int LSTMDim, int cellDim, DeviceDescriptor device, string outputName)
{

    Func<Variable, Function> pastValueRecurrenceHook = (x) => CNTKLib.PastValue(x);

    //creating LSTM cell for each input variable
    Function LSTMFunction = LSTMPComponentWithSelfStabilization<float>(
        input,
        new int[] { LSTMDim },
        new int[] { cellDim },
        pastValueRecurrenceHook,
        pastValueRecurrenceHook,
        device).Item1;

    //after the LSTM sequence is created return the last cell in order to continue generating the network
    Function lastCell = CNTKLib.SequenceLast(LSTMFunction);

    //insert a dropout of 20% between the LSTM and the dense layer
    var dropOut = CNTKLib.Dropout(lastCell,0.2, 1);

    //create last dense layer before output
    var outputLayer =  FullyConnectedLinearLayer(dropOut, outDim, device, outputName);

    return outputLayer;
}
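
The helpers LSTMPComponentWithSelfStabilization and FullyConnectedLinearLayer are part of the demo (and of the official CNTK C# examples) and are not listed here; a minimal sketch of the dense-layer helper, under those assumptions, could look like:

static Function FullyConnectedLinearLayer(Variable input, int outputDim,
    DeviceDescriptor device, string outputName = "")
{
    int inputDim = input.Shape[0];
    //weight matrix W with shape [outputDim x inputDim]
    var W = new Parameter(new int[] { outputDim, inputDim }, DataType.Float,
        CNTKLib.GlorotUniformInitializer(), device, "timesParam");
    //bias vector b with shape [outputDim]
    var b = new Parameter(new int[] { outputDim }, 0.0f, device, "plusParam");
    //W * input + b
    return CNTKLib.Plus(CNTKLib.Times(W, input), b, outputName);
}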

Training the network

In order to train the model, the nextBatch() method is implemented; it produces the batches that feed the training function. Note that, because CNTK supports variable sequence lengths, we must feed the batches as lists of sequences. This is a convenience function that generates small batches of data, often referred to as minibatches.

private static IEnumerable<(float[] X, float[] Y)> nextBatch(float[][] X, float[][] Y, int mMSize)
{

    float[] asBatch(float[][] data, int start, int count)
    {
        var lst = new List<float>();
        for (int i = start; i < start + count; i++)
        {
            if (i >= data.Length)
                break;

            lst.AddRange(data[i]);
        }
        return lst.ToArray();
    }

    for (int i = 0; i <= X.Length - 1; i += mMSize)
    {
        var size = X.Length - i;
        if (size > 0 && size > mMSize)
            size = mMSize;

        var x = asBatch(X, i, size);
        var y = asBatch(Y, i, size);

        yield return (x, y);
    }
}

Note: Since this tutorial is implemented as a WinForms C# project which can visualize the training and testing data sets, as well as show the best model found during the training process, there are a lot of other implemented methods which are not mentioned here, but which can be found in the demo source code attached to this blog post.

Key Insight

When working with LSTMs, the user should pay attention to the following:

Since the LSTM must work with axes of unknown dimensions, the variables should be defined in a different way than we saw in the previous blog posts. So the input and the output variables are initialized with the following code listing:

// build the model
var feature = Variable.InputVariable(new int[] { inDim }, DataType.Float, featuresName, null, false /*isSparse*/);
var label = Variable.InputVariable(new int[] { ouDim }, DataType.Float, labelsName, new List<CNTK.Axis>() { CNTK.Axis.DefaultBatchAxis() }, false);

As specified in the original tutorial: "Specifying the dynamic axes enables the recurrence engine to handle the time sequence data in the expected order." Please take the time to understand how to work with both static and dynamic axes in CNTK, as described here; the dynamic axes are a key point in LSTMs.
Now the implementation continues with defining the learning rate, the momentum, the learner and the trainer.

 
var lstmModel = LSTMHelper.CreateModel(feature, ouDim, hiDim, cellDim, device, "timeSeriesOutput");

Function trainingLoss = CNTKLib.SquaredError(lstmModel, label, "squarederrorLoss");
Function prediction = CNTKLib.SquaredError(lstmModel, label, "squarederrorEval");


// prepare for training
TrainingParameterScheduleDouble learningRatePerSample = new TrainingParameterScheduleDouble(0.0005, 1);
TrainingParameterScheduleDouble momentumTimeConstant = CNTKLib.MomentumAsTimeConstantSchedule(256);

IList<Learner> parameterLearners = new List<Learner>() {
    Learner.MomentumSGDLearner(lstmModel.Parameters(), learningRatePerSample, momentumTimeConstant, /*unitGainMomentum = */true)  };

//create trainer
var trainer = Trainer.CreateTrainer(lstmModel, trainingLoss, prediction, parameterLearners);

Now the code is ready, and 10 epochs should return an acceptable result:

 
// train the model
for (int i = 1; i <= iteration; i++)
{
    //get the next minibatch amount of data
    foreach (var miniBatchData in nextBatch(featureSet.train, labelSet.train, batchSize))
    {
        var xValues = Value.CreateBatch<float>(new NDShape(1, inDim), miniBatchData.X, device);
        var yValues = Value.CreateBatch<float>(new NDShape(1, ouDim), miniBatchData.Y, device);

        //Combine variables and data in to Dictionary for the training
        var batchData = new Dictionary<Variable, Value>();
        batchData.Add(feature, xValues);
        batchData.Add(label, yValues);

        //train the minibatch data
        trainer.TrainMinibatch(batchData, device);
    }

    if (this.InvokeRequired)
    {
        // Execute the same method, but this time on the GUI thread
        this.Invoke(
            new Action(() =>
            {
                //output training process
                progressReport(trainer, lstmModel.Clone(), i, device);
            }
            ));
    }
    else
    {
        //output training process
        progressReport(trainer, lstmModel.Clone(), i, device);

    }             
}

Model Evaluation

Model evaluation is implemented during the training process. In this way we can observe the learning process and how the model gets better and better.

For each minibatch the progress method is called, which updates the charts for the training and testing data sets.

void progressReport(Trainer trainer, Function model, int iteration, DeviceDescriptor device)
{
    textBox3.Text = iteration.ToString();
    textBox4.Text = trainer.PreviousMinibatchLossAverage().ToString();
    progressBar1.Value = iteration;

    reportOnGraphs(trainer, model, iteration, device);
}

private void reportOnGraphs(Trainer trainer, Function model, int i, DeviceDescriptor device)
{
    currentModelEvaluation(trainer, model, i, device);
    currentModelTest(trainer, model, i, device);
}

The following picture shows the training process, where the model evaluation is shown simultaneously for the training and the testing data sets.
The Loss value during the training is plotted as well.

As can be seen, the blog post extends the original tutorial with some handy tricks for the training process. This demo is also a good starting point for developing a better tool for LSTM time series training. The full source code of this blog post, which shows much more implementation detail than presented here, can be found here.

Deploy CNTK model to Excel using C#


In the last blog post, we saw how to save the model state (checkpoint) in order to load it and train it again. We have also seen how to save the model for evaluation and testing. In fact, we have seen how to prepare the model to be production ready.

Once we finish with the modelling process, we enter the production phase, installing the model where we can use it for solving real world problems. This blog post describes how a deployed CNTK model can be exported to Excel as an Add-In and used like an ordinary Excel formula.

Preparing and deploying CNTK model

In the previous posts we saw how to train and save the model. That will be our starting point for this post.

Assume we trained and saved the model for evaluation from the previous blog post with the file name "IrisModel.model". The model predicts the Iris flower type based on 4 input parameters, as we saw earlier.

  1. The first step is to create a .NET Class Library and install the following NuGet packages:
    1. CNTK CPU Only ver. 2.3
    2. Excel-DNA Add-In
    3. Include the saved IrisModel.model file in the project as Content, which should be copied into the Debug folder of the application.

As we can see, for this export we need Excel-DNA, a fantastic library for exposing almost anything as an Excel Add-In. It can be installed as a NuGet package, and more information can be found at http://excel-dna.net/.

The following picture shows the above 3 actions.

Once we prepare everything, we can start with the implementation of the Excel Addin.

  1. Change the Class.cs name to IrisModel.cs, and implement two methods:
    1. public static string IrisEval(object arg) and
    2. private static string EvaluateModel(float[] vector).

The first method is the Excel function which will be called directly in Excel, and the second method is similar to the model evaluation from the previous blog post. The following code snippet shows the implementation of both methods:

[ExcelFunction(Description = "IrisEval - Prediction for the Iris flower based on 4 input values.")]
public static string IrisEval(object arg)
{
    try
    {
        //First convert object in to array
        object[,] obj = (object[,])arg;

        //create list to convert values
        List<float> calculatedOutput = new List<float>();
        //
        foreach (var s in obj)
        {
            var ss = float.Parse(s.ToString(), CultureInfo.InvariantCulture);
            calculatedOutput.Add(ss);
        }
        if (calculatedOutput.Count != 4)
            throw new Exception("Incorrect number of input variables. It must be 4!");
        return EvaluateModel(calculatedOutput.ToArray());
    }
    catch (Exception ex)
    {
        return ex.Message;
    }

}
private static string EvaluateModel(float[] vector)
{
    //load the model from disk
    var ffnn_model = Function.Load(@"IrisModel.model", DeviceDescriptor.CPUDevice);

    //extract features and label from the model
    Variable feature = ffnn_model.Arguments[0];
    Variable label = ffnn_model.Output;

    Value xValues = Value.CreateBatch<float>(new int[] { feature.Shape[0] }, vector, DeviceDescriptor.CPUDevice);
    //Value yValues = - we don't need it, because we are going to calculate it

    //map the variables and values
    var inputDataMap = new Dictionary<Variable, Value>();
    inputDataMap.Add(feature, xValues);
    var outputDataMap = new Dictionary<Variable, Value>();
    outputDataMap.Add(label, null);

    //evaluate the model
    ffnn_model.Evaluate(inputDataMap, outputDataMap, DeviceDescriptor.CPUDevice);
    //extract the result  as one hot vector
    var outputData = outputDataMap[label].GetDenseData<float>(label);
    var actualLabels = outputData.Select(l => l.IndexOf(l.Max())).ToList();
    var flower = actualLabels.FirstOrDefault();
    var strFlower = flower == 0 ? "setosa" : flower == 1 ? "versicolor" : "virginica";
    return strFlower;
}

That is all we need for model evaluation in Excel.

Notice that the project must be built for the x64 architecture, and the installed Excel must also be the x64 version. This demo will not work with 32-bit Excel.

Rebuild the project and open an Excel file with the Iris data set. You can use the file included in the demo project, specially prepared for this blog post.

  • Go to Excel -> Options -> Add-Ins,
    • Install the ExportCNTKToExcel-AddIn64-packed add-in file.

  • Start typing the Excel formula:
    • IrisEval(A1:D1) and press Enter. And the magic happens.

 

Complete source code for this blog post can be found here.

 

How to save CNTK model to file in C#


The final product of the training process is a model which should be usable in production: safe, reliable and accurate. Model training is usually a frustrating and time consuming process, quite unlike the demos that introduce a library. But once the model is built with the right combination of parameter values and network architecture, the modelling process turns into an interesting and fun task, since it calculates the values just as we expect.

Much of the time, we have to save the current state of the model and continue training later, for various reasons:

  • to change the parameters of the learner,
  • to switch from one machine to another,
  • to share the state of the model with your team mates,
  • to switch from CPU to GPU,
  • etc.

In all the above cases the current state of the model should be saved, and the training process continued from the last saved state of the model. Training from the beginning is not an option, since we have already invested time and computation in the progress of the model building.

CNTK supports two kinds of model persistence:

  • a production/evaluation-ready state, and
  • a checkpoint of the model for later training.

In the first case, the model is prepared for evaluation and production, but it cannot be trained again, because it is stripped of all information except what is needed for evaluation. During the saving process, only one file is generated.

In the second case, besides the model file, another file is generated with the name "modelname.ckp". The file contains all the information needed for the continuation of training. Once the trainer checkpoint is persisted, we can continue with model training even if we change the following:

  • the training data set with the same dimensions and data types
  • the parameters of the learner,
  • the learner

What we cannot change in order to continue with training is the network model itself. In other words, the model must keep the same number of layers and the same input and output dimensions.
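
The two kinds of persistence map to two different API calls, sketched below (the file name is illustrative):

//1) evaluation/production-ready model: a single file, cannot be trained further
model.Save("IrisModel.model");

//2) checkpoint for later training: besides the model file, the trainer also
//   writes the "IrisModel.model.ckp" file with the state needed to continue
trainer.SaveCheckpoint("IrisModel.model");
//...later, after the network and the trainer are recreated:
trainer.RestoreFromCheckpoint("IrisModel.model");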

Saving, loading and evaluating the model

Once the model is trained it can be persisted as a separate file. As a separate file, it can be loaded and evaluated with a different data set, but the number of features and labels must remain the same as when it was trained. Use this method when you want to share the model with someone else, or when you want to deploy the model to production.

The model is saved simply by calling the CNTK method  Save:

public void SaveTrainedModel(Function model, string fileName)
{
    model.Save(fileName);
}

The model evaluation requires several steps:

  • load the model from the file,
  • extract the features and the label from the model,
  • call the model's Evaluate method, passing a batch created from the features, the label and the evaluation data set.

The model is loaded by calling the Load method.

public Function LoadTrainedModel(string fileName, DeviceDescriptor device)
{
   return Function.Load(fileName, device, ModelFormat.CNTKv2);
}

Once the model is loaded, the features and the label are extracted from the model in the following way:

//load the model from file
Function ffnn_model = Function.Load(modelFile, device);
//extract features and label from the model
Variable feature = ffnn_model.Arguments[0];
Variable label = ffnn_model.Output;

The next step is creating the minibatch in order to pass the data to the evaluation. In this case we are going to create only one row for the Iris example:

//Example: 5.0f, 3.5f, 1.3f, 0.3f, setosa
float[] xVal = new float[4] { 5.0f, 3.5f, 1.3f, 0.3f };
Value xValues = Value.CreateBatch<float>(new int[] {feature.Shape[0] }, xVal, device);
//Value yValues = - we don't need it, because we are going to calculate it

Once we have created the variables and values we can map them, pass them to the model evaluation, and calculate the result:

//map the variables and values
var inputDataMap = new Dictionary<Variable, Value>();
inputDataMap.Add(feature,xValues);
var outputDataMap = new Dictionary<Variable, Value>();
outputDataMap.Add(label, null);
//evaluate the model
ffnn_model.Evaluate(inputDataMap, outputDataMap, device);
//extract the result  as one hot vector
var outputData = outputDataMap[label].GetDenseData<float>(label);

The evaluation result should be transformed into the proper format and compared with the expected result:

//transforms into class value
var actualLabels = outputData.Select(l => l.IndexOf(l.Max())).ToList();
var flower = actualLabels.FirstOrDefault();
var strFlower = flower == 0 ? "setosa" : flower == 1 ? "versicolor" : "virginica";
Console.WriteLine($"Model Prediction: Input({xVal[0]},{xVal[1]},{xVal[2]},{xVal[3]}), Iris Flower={strFlower}");
Console.WriteLine($"Model Expectation: Input({xVal[0]},{xVal[1]},{xVal[2]},{xVal[3]}), Iris Flower= setosa");

Training a previously saved model

Training a previously saved model is very simple, since it requires no special coding. Right after the trainer is created with everything necessary (network, learning rate, momentum and so on),
you just need to call

 trainer.RestoreFromCheckpoint(strIrisFilePath);

No additional code needs to be added.
The above method is called after you have successfully saved the model state by calling

trainer.SaveCheckpoint(strIrisFilePath);

The method is usually called at the end of the training process.
Complete source code from this blog post can be found here.

Testing and Validation of CNTK models using C#


…continued from the previous post.
Once the model is built, and the Loss and Validation functions satisfy our expectations, we need to validate and test the model using data which was not part of the training data set (unseen data). The model validation is very important because we want to see if our model is trained well, so that it evaluates unseen data approximately as well as the training data. Otherwise, a model which cannot predict the output on unseen data is called an overfitted model. Overfitting can happen when the model is trained long enough that it shows very high performance on the training data set, but evaluates badly on the testing data.
We will continue with the implementation from the previous two posts, and implement model validation. After the model is trained, the model and the trainer are passed to the evaluation method. The evaluation method loads the testing data and calculates the output using the passed model. Then it compares the calculated (predicted) values with the output from the testing data set and calculates the accuracy. The following source code shows the evaluation implementation.

private static void EvaluateIrisModel(Function ffnn_model, Trainer trainer, DeviceDescriptor device)
{
    var dataFolder = "Data";//files must be on the same folder as program
    var trainPath = Path.Combine(dataFolder, "testIris_cntk.txt");
    var featureStreamName = "features";
    var labelsStreamName = "label";

    //extract features and label from the model
    var feature = ffnn_model.Arguments[0];
    var label = ffnn_model.Output;

    //stream configuration to distinct features and labels in the file
    var streamConfig = new StreamConfiguration[]
        {
            new StreamConfiguration(featureStreamName, feature.Shape[0]),
            new StreamConfiguration(labelsStreamName, label.Shape[0])
        };

    // prepare testing data
    var testMinibatchSource = MinibatchSource.TextFormatMinibatchSource(
        trainPath, streamConfig, MinibatchSource.InfinitelyRepeat, true);
    var featureStreamInfo = testMinibatchSource.StreamInfo(featureStreamName);
    var labelStreamInfo = testMinibatchSource.StreamInfo(labelsStreamName);

    int batchSize = 20;
    int miscountTotal = 0, totalCount = 0;
    while (true)
    {
        var minibatchData = testMinibatchSource.GetNextMinibatch((uint)batchSize, device);
        if (minibatchData == null || minibatchData.Count == 0)
            break;
        totalCount += (int)minibatchData[featureStreamInfo].numberOfSamples;

        // expected labels are in the mini batch data.
        var labelData = minibatchData[labelStreamInfo].data.GetDenseData<float>(label);
        var expectedLabels = labelData.Select(l => l.IndexOf(l.Max())).ToList();

        var inputDataMap = new Dictionary<Variable, Value>() {
            { feature, minibatchData[featureStreamInfo].data }
        };

        var outputDataMap = new Dictionary<Variable, Value>() {
            { label, null }
        };

        ffnn_model.Evaluate(inputDataMap, outputDataMap, device);
        var outputData = outputDataMap[label].GetDenseData<float>(label);
        var actualLabels = outputData.Select(l => l.IndexOf(l.Max())).ToList();

        int misMatches = actualLabels.Zip(expectedLabels, (a, b) => a.Equals(b) ? 0 : 1).Sum();

        miscountTotal += misMatches;
        Console.WriteLine($"Validating Model: Total Samples = {totalCount}, Mis-classify Count = {miscountTotal}");

        if (totalCount >= 20)
            break;
    }
    Console.WriteLine($"---------------");
    Console.WriteLine($"------TESTING SUMMARY--------");
    float accuracy = (1.0F - (float)miscountTotal / totalCount);
    Console.WriteLine($"Model Accuracy = {accuracy}");
    return;

}

The implemented method is called from the Training method shown previously.

 EvaluateIrisModel(ffnn_model, trainer, device);

As can be seen, the model validation has shown that the model predicts the data with high accuracy, as shown in the following picture.

This was the last post in the series of blog posts about using feed forward neural networks to train the Iris data using CNTK and C#.

The full source code for all three samples can be found here.

Train Iris data by Batch using CNTK and C#


In the previous post we saw how to train an NN model by using a MinibatchSource. Usually we should use it when we have a large amount of data. In the case of a small amount of data, all the data can be loaded into memory and passed in whole to each iteration in order to train the model. This blog post will implement this kind of feeding of the trainer.
We will reuse the previous implementation, so the previous source code can be the starting point. For data loading we have to define a new method. The Iris data is stored in text format like the following:

sepal_length,sepal_width,petal_length,petal_width,species
5.1,3.5,1.4,0.2,setosa(1 0 0)
7.0,3.2,4.7,1.4,versicolor(0 1 0)
7.6,3.0,6.6,2.1,virginica(0 0 1)
...

The output column is encoded with the 1-N-1 (one-hot) encoding rule we have seen previously (a small sketch follows the list below).
The method will read all the data from the file, parse it and create two float arrays:

  • float[] feature, and
  • float[] label.
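
For the label array, each species is expanded into its one-hot row; a hypothetical helper (not part of the original demo) illustrating the encoding:

static float[] ToOneHot(int classIndex, int numClasses)
{
    var oneHot = new float[numClasses];//all zeros
    oneHot[classIndex] = 1f;           //e.g. setosa(1 0 0) -> classIndex = 0
    return oneHot;
}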

As can be seen, both arrays are 1D, which means all the data will be inserted in one dimension, because CNTK requires it so. Since the data is in a 1D array, we also have to provide the dimensionality of the data, so that CNTK can resolve which values belong to each feature. The following listing shows the loading of the Iris data into two 1D arrays returned as a tuple.

static (float[], float[]) loadIrisDataset(string filePath, int featureDim, int numClasses)
{
    var rows = File.ReadAllLines(filePath);
    var features = new List<float>();
    var label = new List<float>();
    for (int i = 1; i < rows.Length; i++)
    {
        var row = rows[i].Split(',');
        var input = new float[featureDim];
        for (int j = 0; j < featureDim; j++)
        {
            input[j] = float.Parse(row[j], CultureInfo.InvariantCulture);
        }
        var output = new float[numClasses];
        for (int k = 0; k < numClasses; k++)
        {
            int oIndex = featureDim + k;
            output[k] = float.Parse(row[oIndex], CultureInfo.InvariantCulture);
        }

        features.AddRange(input);
        label.AddRange(output);
    }

    return (features.ToArray(), label.ToArray());
}

Once the data is loaded, we have to change only a small amount of the previous code in order to implement batching instead of using a MinibatchSource. At the beginning we declare several variables to define the NN model structure. Then we call loadIrisDataset and define xValues and yValues, which we use to create the feature and label input variables. Then we create a dictionary which connects the features and labels with the data values, which we will pass to the trainer later.
The next part of the code is the same as in the previous version: creating the NN model, the Loss and Evaluation functions, and the learning rate.

Then we create a loop of 800 iterations. Once the iteration reaches the maximum value, the program outputs the model properties and terminates.
All of the above is implemented in the following code.

public static void TrainIriswithBatch(DeviceDescriptor device)
{
    //data file path
    var iris_data_file = "Data/iris_with_hot_vector.csv";

    //Network definition
    int inputDim = 4;
    int numOutputClasses = 3;
    int numHiddenLayers = 1;
    int hidenLayerDim = 6;
    int sampleSize = 130;

    //load data in to memory
    var dataSet = loadIrisDataset(iris_data_file, inputDim, numOutputClasses);

    // build a NN model
    //define input and output variable
    var xValues = Value.CreateBatch<float>(new NDShape(1, inputDim), dataSet.Item1, device);
    var yValues    = Value.CreateBatch<float>(new NDShape(1, numOutputClasses), dataSet.Item2, device);

    // build a NN model
    //define input and output variable and connecting to the stream configuration
    var feature = Variable.InputVariable(new NDShape(1, inputDim), DataType.Float);
    var label = Variable.InputVariable(new NDShape(1, numOutputClasses), DataType.Float);

    //Combine variables and data in to Dictionary for the training
    var dic = new Dictionary<Variable, Value>();
    dic.Add(feature, xValues);
    dic.Add(label, yValues);

    //Build simple Feed Forward Neural Network model
    // var ffnn_model = CreateMLPClassifier(device, numOutputClasses, hidenLayerDim, feature, classifierName);
    var ffnn_model = createFFNN(feature, numHiddenLayers, hidenLayerDim, numOutputClasses, Activation.Tanh, "IrisNNModel", device);

    //Loss and error functions definition
    var trainingLoss = CNTKLib.CrossEntropyWithSoftmax(new Variable(ffnn_model), label, "lossFunction");
    var classError = CNTKLib.ClassificationError(new Variable(ffnn_model), label, "classificationError");

    // set learning rate for the network
    var learningRatePerSample = new TrainingParameterScheduleDouble(0.001125, 1);

    //define learners for the NN model
    var ll = Learner.SGDLearner(ffnn_model.Parameters(), learningRatePerSample);

    //define trainer based on ffnn_model, loss and error functions , and SGD learner
    var trainer = Trainer.CreateTrainer(ffnn_model, trainingLoss, classError, new Learner[] { ll });

    //Preparation for the iterative learning process
    //used 800 epochs/iterations. Batch size will be the same as sample size since the data set is small
    int epochs = 800;
    int i = 0;
    while (epochs > -1)
    {
        trainer.TrainMinibatch(dic, device);

        //print progress
        printTrainingProgress(trainer, i++, 50);

        //
        epochs--;
    }
    //Summary of training
    double acc = Math.Round((1.0 - trainer.PreviousMinibatchEvaluationAverage()) * 100, 2);
    Console.WriteLine($"------TRAINING SUMMARY--------");
    Console.WriteLine($"The model trained with the accuracy {acc}%");
}

If we run the code, the output will be the same as we got from the previous blog post example:

The full source code with training data can be downloaded here.

Using SSH Keys for License generation and validation in .NET Applications


This blog post is going to present how you can implement license functionality in your .NET application. Providing licensing in your .NET application is very challenging because there is no standard procedure for the implementation; you are free to use whatever you want. But be aware: there is no license which is 100% safe and cannot be cracked or bypassed.

For this purpose I have selected the CocoaFob library for registration code generation and verification in Objective-C applications. The library is mainly for Objective-C based applications, like iOS mobile applications and other OSX based applications. It is a very interesting library, but you cannot use it in .NET applications, since there is no implementation for the .NET Framework.

The library uses DSA to generate registration keys, which makes it very hard for hackers to produce key generators. The library is also specific because it generates the license key in human readable form: the bytes are converted into a Base32 string to avoid ambiguous characters. It also groups the codes into sets of five characters separated by dashes. Also, because of a random element introduced in the DSA signing process, the generated license is different every time.

A license key produced using a 512-bit DSA key looks like the following sample:

GAWQE-FCUGU-7Z5JE-WEVRA-PSGEQ-Y25KX-9ZJQQ-GJTQC-CUAJL-ATBR9-WV887-8KAJM-QK7DT-EZHXJ-CR99C-A

More information about CocoaFob can be found at GitHub page: https://github.com/glebd/cocoafob

The library uses the BouncyCastle.Crypto NuGet package for DSA signing and verification.

The CocoaFob library for .NET contains two classes:

  1. The LicenseData class, which provides the license properties used in license generation. It can be anything: name, product number, email, date of expiration, etc.
  2. The LicenseGenerator class, which is responsible for signing and validating the license.

For this blog post the LicenseData class has the following implementation:

public class LicenseData
{
    protected internal string productCode;
    protected internal string name;
    protected internal string email;

    public virtual string toLicenseStringData()
    {
        StringBuilder result = new StringBuilder();
        if (productCode != null)
        {
            result.Append(productCode);
            result.Append(',');
        }

        //name is a mandatory property
        if (name == null)
            throw new System.Exception("name cannot be null");
        result.Append(name);

        if (email != null)
        {
            result.Append(',');
            result.Append(email);
        }
        return result.ToString();
    }
    ......
}

As can be seen from the code snippet above, the license data contains the user name, the product key and the email address. Also, only the name property is mandatory, which means you can generate a license key based on the user name only.

Generating the License Key

Once we have the license data, we can proceed with the license key generation. The license is generated using a DSA signature which uses the SSH private key. You can generate public and private SSH keys using any of the available tools, e.g. OpenSSH, Git Bash, etc. More information about private and public key generation can be found at this link. Once we have the public and private keys, we can generate the license and validate it. One important thing to remember is that you have to take care of your private key: it should always be kept secure and no one should have access to it.

The public key is used for license validation, and it is usually packed with the application as part of the deployment package. The process of generating the license is shown in the following code snippet:

public string makeLicense(LicenseData licenseData)
{
     if (!CanMakeLicenses)
     {
       throw new System.InvalidOperationException("The LicenseGenerator cannot make licenses as it was not configured with a private key");
     }
     try
     {
        //
        var dsa = SignerUtilities.GetSigner("SHA1withDSA");
        dsa.Init(true, privateKey);
        //
        string stringData = licenseData.toLicenseStringData();
        byte[] licBytes = Encoding.UTF8.GetBytes(stringData);
        dsa.BlockUpdate(licBytes, 0, licBytes.Length);
        //
        byte[] signed = dsa.GenerateSignature();
        string license = ToLicenseKey(signed);
        return license;
    }
    catch (Exception e)
    {
        throw new LicenseGeneratorException(e);
    }
}

First the DSA signer is created and initialized with the privateKey we have provided. Then the license string is generated from the license data and converted into UTF8-encoded bytes (licBytes). Then we update the DSA signer with licBytes. Now the DSA signer can generate the signature in bytes. The signature is converted into the license key by calling the ToLicenseKey method. The method is shown in the following code snippet:

private string ToLicenseKey(byte[] signature)
{
    /* base 32 encode the signature */
    var result = Base32.ToString(signature);

    /* replace O with 8 and I with 9 */
    result = result.Replace("O", "8").Replace("I", "9");

    /* remove padding if any. */
    result = result.Replace("=", "");
           

    /* chunk with dashes */
    result = split(result, 5);
    return result;
}

The magic happens in this method during the conversion of the signature from bytes into a human readable string. The conversion is done using the Base32 string helper method.
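
The split helper used in ToLicenseKey is not listed in the post; a minimal sketch, assuming groups of chunkSize characters separated by dashes, could look like:

private static string split(string s, int chunkSize)
{
    var sb = new StringBuilder();
    for (int i = 0; i < s.Length; i += chunkSize)
    {
        //separate every chunk of chunkSize characters with a dash
        if (i > 0) sb.Append('-');
        sb.Append(s.Substring(i, Math.Min(chunkSize, s.Length - i)));
    }
    return sb.ToString();
}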

Verify the License Key

The license verification process is defined in the verifyLicense method. You have to provide the SSH public key, as well as the license data and the license key to verify:

public virtual bool verifyLicense(LicenseData licenseData, string license)
{
	if (!CanVerifyLicenses)
	{
		throw new System.InvalidOperationException("The LicenseGenerator cannot verify licenses as it was not configured with a public key");
	}
    try
     {
        //Signature dsa = Signature.getInstance("SHA1withDSA", "SUN");
        var dsa = SignerUtilities.GetSigner("SHA1withDSA");
        dsa.Init(false, publicKey);

        //
        string stringData = licenseData.toLicenseStringData();
        byte[] msgBytes = Encoding.UTF8.GetBytes(stringData);
        dsa.BlockUpdate(msgBytes, 0, msgBytes.Length);


        var dec = FromLicenseKey(license);
        var retVal = dsa.VerifySignature(dec);
        //
        return retVal; 
	}
    catch (Exception e)
    {
        throw new LicenseGeneratorException(e);
    }
}

As can be seen from the code above, the validation process is done by generating the license string data, converting the license key into a signature, and verifying it; the validation returns true if the license is valid, otherwise it returns false.

The whole project is published on GitHub, and can be downloaded from http://github.com/bhrnjica/cocoafob

 

Testing the Library

The library solution contains a unit test project in which you can see how to use this library in a real scenario in order to implement licensing in a .NET app.

Happy programming!